Subject: more power
Hi Denis,
30 Dec 98, Denis Tonn of 1:153/908 wrote to Eddy Thilleman:
ET> Memory = virtual memory?
DT> Yes, but most programmers can ignore the distinction.
Distinction? Where is the distinction if one thing is equal to the other?
DT> To be honest I can't tell for sure if you mean something
DT> subtly different in the above..
Different from what you said? I meant the same thing, only put in other
words to check whether I understand it. I process new information by
re-ordering the details and attaching (associating) them to information I
already know; that way I understand the new information, link it to what
is already in my brain, and can easily store it in my long-term memory
(the brain is associative).
DT> Not all addresses will be "valid" in a process (holes in the
DT> valid address range).
OK, I understand this.
DT> Each process will have a different set of "holes" and a
DT> different set of RAM pages backing these virtual addresses
DT> (different data/code).
What has this to do with the CPU limit of 4GB of virtual memory addresses?
Everything (including the holes) has to fit within this limit. If the
limit were lifted to, say, 10GB, then the CPU could address more virtual
memory and (I think) OS/2 would take advantage of the greater virtual
address space without any changes to the kernel code. If so, this limit is
transparent to the operating system... But for that to work, the CPU limit
has to be obtained from the CPU, and I don't know if that's possible;
otherwise the limit has to be hardcoded in the kernel so the kernel knows
it. The latter implies that a new OS version would be needed to take
advantage of a greater virtual address space (if and when that occurs).
But what about older CPUs that still have the 4GB limit; couldn't the new
OS version run on those as well?
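(Side note: later x86 CPUs expose their address widths through the
extended CPUID leaf 0x80000008, so an OS can in principle obtain the limit
from the CPU itself. A hedged sketch of that idea, assuming a GCC-style
compiler with <cpuid.h>; the CPUs discussed here predate that leaf, so
this is purely illustrative:)

  /* Query the CPU's physical and linear (virtual) address widths.  */
  /* Availability of <cpuid.h> and of leaf 0x80000008 is assumed.   */
  #include <stdio.h>
  #include <cpuid.h>

  int main(void)
  {
      unsigned int eax, ebx, ecx, edx;

      if (__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx)) {
          /* EAX bits 7:0  = physical address bits,
             EAX bits 15:8 = linear (virtual) address bits */
          printf("physical address bits: %u\n", eax & 0xffU);
          printf("virtual  address bits: %u\n", (eax >> 8) & 0xffU);
      } else {
          printf("leaf 0x80000008 not supported; assume 32 bits (4GB)\n");
      }
      return 0;
  }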
[system address space]
DT> The kernel has 4GB selectors. It can "see" the whole system,
OK.
DT> The "system arena" is mapped across all processes and these
DT> addresses will be the same in all process contexts.
Is the system arena the same for all processes? I'm not sure, but by
"mapped across all processes" do you mean that each process has pointers
(mappings) to this same system arena?
DT> It is only reachable with a selector that has a "large" 4GB
DT> limit (kernel).
Only at ring 0?
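Just to check my own understanding of what a "large" 4GB limit means, here
is the descriptor arithmetic as I read it (a plain i386 detail, nothing
OS/2-specific, so correct me if I'm off):

  /* i386 segment descriptor limit arithmetic (illustration only, not
   * OS/2 kernel code).  The descriptor's limit field is 20 bits; with
   * the granularity bit G = 1 it counts 4 KB pages instead of bytes:
   *
   *     (0xFFFFF + 1) * 4096 bytes = 4 GB
   *
   * so a selector with limit 0xFFFFF and G = 1 spans the whole 4 GB
   * linear space, while ordinary application selectors are set up with
   * a smaller limit.
   */
  #define DESC_LIMIT_FIELD    0xFFFFFULL     /* 20-bit limit field  */
  #define DESC_GRANULE        4096ULL        /* G = 1: 4 KB units   */
  #define LARGE_SELECTOR_SPAN ((DESC_LIMIT_FIELD + 1) * DESC_GRANULE)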
DT> The "system address space" is the system arena plus the
DT> context of the current (active) process.
Is this the reason why the system arena is "mapped across all processes"?
DT> Since nobody except IBM was really using this area, it was
I see.
DT> Arbitrary number. It is faster to do context switching by
OK.
DT> direct copy of page directory entries from the PTDA control
PTDA = Page Table Directory Area?
DT> block if the app uses less than 64MB of private address space
So if the app uses more than 64MB of private address space, then it's not
faster (maybe even slower), but because very few apps need or use 64MB or
more of private address space, this is a good trade-off.
DT> (16 directory entries maps 64MB of RAM pages). It is a
DT> tradeoff. They could have stored the whole page directory in
DT> the PTDA, but that would make the PTDA that much larger (64
DT> bytes vs 4K bytes of the page directory).
So 16 directory entries take 64 bytes?
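If I do the arithmetic: a page directory entry is 4 bytes and maps one
page table of 1024 4KB pages, i.e. 4MB, so 16 entries take 16 x 4 = 64
bytes and cover 16 x 4MB = 64MB. A rough sketch of the copy-the-entries
idea as I understand it (my own illustration, not actual kernel code, and
the names are made up):

  /* Illustration only: switch the private arena by copying a handful
   * of page directory entries (PDEs) cached in the per-process control
   * block (the PTDA), instead of keeping a full 4 KB page directory
   * per process.
   *
   *   1 PDE (4 bytes) -> 1 page table -> 1024 * 4 KB = 4 MB
   *   16 PDEs -> 64 bytes in the PTDA, covering 64 MB
   */
  #define FAST_SWITCH_PDES 16

  void switch_private_arena(unsigned long *live_page_directory,
                            const unsigned long *incoming_ptda_pdes)
  {
      int i;

      for (i = 0; i < FAST_SWITCH_PDES; i++)
          live_page_directory[i] = incoming_ptda_pdes[i];
      /* a real kernel would also flush the TLB here (reload CR3) */
  }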
DT> See below as to why I place "private" in quotes. Different
DT> processes that start with the same executable will share read
DT> only pages.
So private to one EXE file, but shared across the processes that were
started from the same physical EXE file?
DT> pages at a time). This can have a domino effect, since a DLL
DT> can reference another DLL. The performance impact would be
DT> considerable..
I see.
DT>> Now, there is a concept of "instance data" allocated in the
DT>> "shared address arena".
DT> Yep.. But don't overuse/misuse it. As I recommended, it is
What are the drawbacks when you do? From your reply it seems to me that
"instance data" and/or the "shared address arena" are limited in size?
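To make the address exhaustion concrete for myself: every named shared
allocation reserves linear addresses in the one shared arena that all
processes have in common, so those addresses run out system-wide. A
minimal sketch with the Dos memory APIs (my own example; the object name
is made up, error handling is omitted, and it needs the OS/2 toolkit to
compile):

  #define INCL_DOSMEMMGR
  #include <os2.h>

  /* Allocate one named, committed 4 KB shared-memory page.  Each such
   * allocation consumes addresses in the single system-wide shared
   * arena, which is why overusing it exhausts the shared address space.
   * "\SHAREMEM\MYDATA" is a made-up name.
   */
  int alloc_instance_block(PVOID *ppBlock)
  {
      APIRET rc;

      rc = DosAllocSharedMem(ppBlock,
                             (PSZ)"\\SHAREMEM\\MYDATA",
                             4096,
                             PAG_READ | PAG_WRITE | PAG_COMMIT);
      return (int)rc;      /* 0 == NO_ERROR */
  }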
DT> It's only real "use" is to decrease exhaustion of addresses in
DT> the shared arena and excessive use of RAM (even a 16 byte DLL
DT> takes up a whole 4K page otherwise). Multiple small 16 DLLs
DT> are "packed" onto a single page (and 64K allocation).
This packing: is it done by loading each DLL at its own real memory
address (known only to the kernel), which the kernel then maps to the same
location in every process? Do all of those DLLs have to fit together in a
single 4K page or in a 64K segment, and if so, which one?
DT> The base address in the LDT is NOT on a 64K boundary for these
Why not? Is there some table at the 64K boundary to convert the addresses
for the process?
DT> DLL's (requiring a different technique to convert an address).
Does this cost more CPU overhead?
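For my own understanding: the usual "tiled" conversion between a 16:16
address and a flat 0:32 address is just shifting, because each LDT
selector normally maps one 64KB tile, so if the base is not on a 64KB
boundary that cheap arithmetic no longer works. A sketch of the normal
tiled case (my own illustration of the tiling rule, not OS/2 library
code):

  /* Tiling rule below the 512 MB line: LDT selector N covers the 64 KB
   * tile that starts at flat address N * 64 KB, so conversion is plain
   * shifting and masking.  When a DLL's LDT base is NOT on a 64 KB
   * boundary, this arithmetic does not apply and a slower lookup is
   * needed instead.
   */
  unsigned long tiled_16_16_to_flat(unsigned short sel, unsigned short off)
  {
      /* drop the TI/RPL bits (low 3); each index is one 64 KB tile */
      return ((unsigned long)(sel >> 3) << 16) | off;
  }

  unsigned short flat_to_tiled_sel(unsigned long flat)
  {
      /* tile number back to an LDT selector with TI = 1, RPL = 3 */
      return (unsigned short)(((flat >> 16) << 3) | 7);
  }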
DT> process. Two copies of PMSHELL.EXE will have the same "name",
DT> but different PIDs.
I see.
DT> "optimize" the RAM usage by mapping the same RAM page into
DT> both processes. In effect this is "shared memory" in the
DT> private address range (which is why I placed private in
DT> quotes above).
I see.
DT> processes will have access to all shared memory, only the
DT> ones that obtain (via API calls) access to the particular
DT> "shared" module or data.
Leaving holes for the processes that have not obtained access to that
"shared" memory?
DT> it determines if it can "share" these pages is via the full
DT> drive/path/filename (filespec) of the EXE being loaded into
I see.
DT> The debugger may need to set breakpoints in the code (thus
DT> breaking the read only requirement),
So it cannot be shared?
DT> An app address range of 3GB leaves only 1GB for the
I see. Nowadays most apps don't need that amount of memory, but that may
come (just like in the old days of CP/M and the beginning of DOS, when
64KB was thought to be enough, then 640KB, then a few MB, then ever more
MB, until now 64MB is normal and 128MB is not uncommon anymore; sheesh,
will this ever stop?).
DT> This can/will reduce the total amount of processes that can be
DT> running at the same time.
This is also a trade-off?
DT> There are some architectural changes that can be implemented
DT> to reduce this effect, but they have not been made (yet).
Would those architectural changes affect existing programs or prevent them
from running?
DT>> The memory above the 512MB line has a similar organization
DT>> into "private" and "shared" regions as the memory below the
DT>> 512MB line.
DT> In the essence of the discussion so far they are the same.
One thing that comes to mind is that memory above the 512MB line does not
have to be shared with 16-bit code, so this memory can be optimized for
32-bit code; for example, allocations there would not have to start on a
64KB boundary, allowing more efficient memory usage? That only makes a
difference if an allocation below the 512MB line (which always starts on a
64KB boundary) has sole use of its 64KB segment, and thus occupies the
whole 64KB segment even when it doesn't need all of it?
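If I ever get to play with this: as far as I know, on kernels that support
the high memory area you ask for an allocation above the 512MB line with
the OBJ_ANY flag. A hedged sketch (assuming a kernel and toolkit recent
enough to define OBJ_ANY):

  #define INCL_DOSMEMMGR
  #include <os2.h>

  /* OBJ_ANY lets the kernel place the object above the 512 MB line if
   * it can; without OBJ_ANY the object always goes in the low private
   * arena below 512 MB.
   */
  int alloc_maybe_high(PVOID *ppMem, ULONG cb)
  {
      APIRET rc;

      rc = DosAllocMem(ppMem, cb,
                       PAG_READ | PAG_WRITE | PAG_COMMIT | OBJ_ANY);
      return (int)rc;      /* 0 == NO_ERROR */
  }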
DT> There isn't a fixed value for "guaranteed" high private and
DT> high shared arenas (1/8 of the himem area each).
Because this memory doesn't have to be shared with 16-bit code?
DT> There is an INF of a debugging "handbook" that covers this
DT> (and a lot more topics) available at the OS/2 developers
DT> site. Look for a file name of SG244640.ZIP It does not cover
DT> any of the "himem" information though..
I will see if I can find it, but not soon (I'll look for it when I really
start to delve into OS/2).
Cheers -=Eddy=- (eddy.thilleman{at}net.hcc.nl)
... * <- Tribble ! <- Tribble squashed by Doc Martens
--- MBM v4.14
 * Origin: Speedy Gonsalez (2:500/143.7)