Subject: Wanted DB2 or Database Programmer for OS/2
MB> By the way, there is no easy way out of this 2 GB file size
MB> limit on a PC. Most implementations of Unix have the same
MB> limitation.
SL> NT did it right.
Not exactly. Implementing the method used for this in NT would be fairly
easy in OS/2. The problem goes much further than that. If you are
programming NT in C, for example, you are still stuck with the problem that
basic file manipulation functions such as fseek() and lseek() take long
arguments. Much worse is that functions such as ftell() and tell() return
long results, so that the ANSI semantics can break down. New 64-bit API
calls could be added to OS/2 for support of large file access, but it is
not clear how to make them compatible with standard development tools.
There are a couple of approaches available for fixing this, but one
important concern is that you do not want to break working code. On the
other hand, you would also like legacy code to be able to access files
larger than 2 GB without any changes at all.
One method would be to make position information subject to a scaling
factor by adding an OS/2 API function such as
"DosFileSetScale()." If a call is made against a file handle to
set the scaling factor to 2, then 4 GB files could be accessed, but
positioned directly only on even byte boundaries. This has a very nice
benefit in that file positioning calls become transparent inside the C
library, although code above them would have to account for it. Only new
applications that knew how to handle the scaling factor would call the new
API function, and everything would work with existing compiler libraries.
The main disadvantage is that legacy code would still be subject to the 2
GB limit.
Another method is to use file segmentation, where 64-bit file pointers are
treated like 32:32 pointers. Each 2 GB file segment would be accessible
separately using the old API and one new API call that set the segment. A
serious disadvantage is that crossing segment boundaries would be annoying.
The advantages and disadvantages are much the same as with 16:16 memory
pointers.
A third approach would be to disable the old file positioning API on large
files. This would allow some legacy applications to work, as long as they
never made explicit calls to ftell(), tell(), fgetpos(), and so on. For
example, it would be possible for an old application to read an entire 4 GB
file with successive calls to fread(). While old applications would have
some ability to access large files, it might be so limited as to be
useless. Also, if a C library is written so that fread() internally
depends on ftell(), for example, then this whole scheme will fail.
NT is intended as an entirely new operating system (despite Microsoft's
original announcement of it as "OS/2 3.0"), and it has the
dubious advantage of having no legacy applications at all. NT runs DOS
applications, but there are no mechanisms at all for handling the kinds of
problems I describe.
My opinion about the best way to handle this in OS/2 is to simply throw in
the towel and stop old applications from accessing large files. This could
be handled by defining a LFSS ("Large File Size Support") flag in
the EXE comparable to the LFNS ("Long File Name Support") flag
introduced in OS/2 1.2. Applications without this flag would be prevented
from seeing large files, although tools like IBM's EXEHDR could be used to
set the flag on old applications for those who wanted to live dangerously.
Then, new API calls should be defined for 64-bit file pointers and for
setting a scaling factor applicable to 32-bit file pointers. This would
provide a transition during which existing development tools and C
compilers could be used, although new tools would be migrated to the 64-bit
file API. Of course, applications could bypass the compiler library and go
directly to the API, much as is commonly done in OS/2 with memory
management.
Of course, changes would have to be made in the Installable File System
specification, but this is actually fairly easy to do. The real problems
would be within the file systems themselves, especially since HPFS is owned
by Microsoft, not IBM. HPFS would have to go, but it really should have
been gone years ago. It was hot stuff when it was introduced, but HPFS
was designed for the 286 and is aging badly even in comparison with such
modern developments as the Linux Second Extended file system, which
someone will probably port to OS/2 someday.
-- Mike
--- Maximus/2 2.02
 * Origin: N1BEE BBS +1 401 944 8498 V.34/V.FC/V.32bis/HST16.8 (1:323/107)
SOURCE: echomail via fidonet.ozzmosis.com