echo: os2prog
to: Peter Fitzsimmons
from: Craig Swanson
date: 1995-11-01 12:20:12
subject: My software somehow breaks DosReset

You've probably seen this already on Usenet, but I'm reposting it here in
case anybody else has run into strange named pipes problems and/or knows
more about DosShutdown( 1 ).

============

* Original Area: comp.os.os2.programmer.misc
* Original To  : Peter Fitzsimmons

 PF> From: Peter.Fitzsimmons@x259_414.gryn.org (Peter Fitzsimmons)

 o> What appears to be happening is that the named pipes code in OS/2
 o> will, in blocking mode, violate the documented behavior by writing
 o> only some of the requested number of bytes to be written.  When it
 o> does this in message


 PF>  I'll bet $10 this is because the message that fails is crossing a 64k
 PF>  boundary.  This is mentioned in the warp docs for DosWrite.

OK, I'll take that bet, since I already know this isn't the case this time.
Was that $10 US or Canadian?  :-)

Actually, your suggestion is a really good one, and in fact this is a common
cause of this type of behavior, as I've seen in the past.  Do you remember
the discussion we had about this back around February in Fido OS2PROG?  You
had a test program that demonstrated the problem, and I used it to verify
that this was what was happening in my program.  I was able to fix all the
64KB boundary problems in my code, but something that happened much more
infrequently was still causing named pipes to seriously malfunction.  This
problem got really bad starting a couple of weeks ago, after I added some
new functionality in which the client was much slower at reading data from
the named pipes than the server was at putting data in.  I couldn't put off
fixing it any more.
 Fortunately, I finally figured out what it was two days ago and will
explain more at the end of this message.

 PF>  A quick test for this is to wrap all your named pipe reads and writes
 PF>  around a function that stages each read/write into a static (tiled)
 PF>  buffer and then memcpy'ing it to the real buffer.

I did something like this, only I changed it so that the tiled buffer and
memory copy are only used when a 64KB boundary would be crossed.  I printed
the buffer start and end addresses to a log file so I could be sure this
was working, and it was.
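
In case it helps anybody else, the wrapper looked roughly like this.  This
is a sketch reconstructed from memory rather than my actual code -- the
names are made up and the logging is left out:

  #define INCL_DOS
  #include <os2.h>
  #include <string.h>

  static PVOID pStage;                 /* lazily allocated staging buffer */

  /* TRUE if the range [p, p+cb) crosses a 64KB linear boundary */
  static BOOL Crosses64K(PVOID p, ULONG cb)
  {
      return (BOOL)((((ULONG)p & 0xFFFFUL) + cb) > 0x10000UL);
  }

  /* DosWrite() wrapper: stage the data through an aligned buffer only
     when the caller's buffer straddles a 64KB boundary. */
  APIRET SafePipeWrite(HFILE hPipe, PVOID pBuf, ULONG cb, PULONG pcbActual)
  {
      APIRET rc;

      if (cb <= 0x10000UL && Crosses64K(pBuf, cb)) {
          if (pStage == NULL) {
              /* DosAllocMem() hands back storage on a 64KB boundary, so
                 a buffer of up to 64KB can never straddle one; OBJ_TILE
                 keeps it addressable through the 16:16 thunk layer. */
              rc = DosAllocMem(&pStage, 0x10000UL,
                               PAG_COMMIT | PAG_READ | PAG_WRITE | OBJ_TILE);
              if (rc != NO_ERROR)
                  return rc;
          }
          memcpy(pStage, pBuf, cb);
          pBuf = pStage;
      }
      return DosWrite(hPipe, pBuf, cb, pcbActual);
  }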

 PF> From the docs: " Note:  When writing message pipes the application is
 PF>  limited to 64K messages. As well, these messages cannot span 64k
 PF>  boundaries due to the current design of the thunk layer in read or
 PF>  write routines. If the message is not written in an aligned manner, the
 PF>  subsequent read will not be able to handle the messages properly. If a
 PF>  64k or less message is written to a pipe from an aligned buffer, the
 PF>  read will handle this properly."

 PF> What a cop out -- It's a BUG!

Yes, I agree, this seems like a bug to me, too.  It's documented now, but
when I originally started writing some of this code the 64KB boundary
problem was not documented in the on-line references I had (the OS/2 2.0
toolkit, I think), which is part of the reason I ran into it in the first
place.
 I hope OS/2 for PowerPC doesn't preserve this behavior.

My speculation as to why IBM left it this way is that named pipes only
support messages up to 64KB in length anyway.  The header put into the pipe
to carry the message length is only 2 bytes long, so there probably isn't a
way to raise the message size limit without breaking named pipe
compatibility between OS/2 releases and between the various operating
systems that support named pipes.  Perhaps they figured that since
programmers using named pipes have to deal with the 64KB limit anyway, it
wasn't so bad to leave the boundary problem in.  Still, that amounts to
pretending it's not a bug by documenting it as expected behavior when it
really is one.
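
Just to make the arithmetic concrete -- the struct below is my guess at the
framing, not anything out of IBM's headers:

  #define INCL_DOS
  #include <os2.h>
  #include <stdio.h>

  typedef struct _PIPEMSGHDR {     /* hypothetical framing */
      USHORT cbMessage;            /* 2-byte length field => max 65535 */
  } PIPEMSGHDR;

  int main(void)
  {
      printf("largest message a %u-byte header can describe: %u bytes\n",
             (unsigned)sizeof(PIPEMSGHDR), 0xFFFFu);
      return 0;
  }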

Anyway, as to the cause of the many named pipe malfunctions I've had that
couldn't be explained by 64KB boundary problems: it turns out that about
1.5 years ago I had added some code to shut down parts of this application.
The application runs on a notebook computer, and since we were at the time
discussing using sleep or suspend mode to avoid having to reboot the
computers as often, we wanted to be sure that any cached data in the file
system buffers was written to disk before shutting down.  I recalled
reading an article that mentioned the DosShutdown( 1 ) call had been added
for exactly this reason -- at least, that is the reason I remember; my
memory may be inaccurate, as I can't recall where I read it.  The Control
Program Reference makes it sound like the call is designed to flush all
caches to disk without shutting down the file systems, leaving the system
in a state in which it can be used again.

It turned out we ended up having to use docking stations and couldn't use
sleep mode anyway, but this partially implemented code was left in the
program; there wasn't much of it, it had never been finished, and by that
time I had pretty much forgotten about it.  But it turns out that nearly
all of the problems I have been having with named pipes are due to this
DosShutdown( 1 ) call.  The call runs, returns NO_ERROR, and after that
DosResetBuffer() no longer works correctly and named pipe message writes
will fragment messages.  The problem persists until the computer is
rebooted.
 So the obvious answer is: remove the call to DosShutdown( 1 ).  I did
that, and all of the weird problems disappeared.

I also verified this by writing a little C program that does just one
thing -- call DosShutdown( 1 ).  I started up a named pipe server and a
client that exchanged messages, with DosResetBuffer() called immediately
after each DosWrite() into the pipe so as to keep the two in sync.  When I
then ran the DosShutdown( 1 ) program, the named pipe server immediately
wrote lots of messages into the named pipe, filled it up, and then wrote a
partial message.  Only then did the client start reading messages from the
pipe, and after it reached the partial message, basically everything
beyond that was trash.
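
For reference, the trigger program is essentially this (the printf is just
for my own logging):

  #define INCL_DOS
  #include <os2.h>
  #include <stdio.h>

  int main(void)
  {
      APIRET rc = DosShutdown(1UL);  /* flush caches, leave system up */
      printf("DosShutdown(1) returned %lu\n", rc);  /* NO_ERROR, yet... */
      return (int)rc;
  }

and the server's write of each message amounts to this (names are
illustrative; same includes as above).  DosResetBuffer() on a pipe should
block until the other end has read the data, which -- as I understand it --
is what kept the two programs in step until the DosShutdown( 1 ) program
ran:

  APIRET WriteMessage(HFILE hPipe, PVOID pMsg, ULONG cbMsg)
  {
      ULONG  cbWritten;
      APIRET rc = DosWrite(hPipe, pMsg, cbMsg, &cbWritten);

      if (rc == NO_ERROR)
          rc = DosResetBuffer(hPipe);  /* wait for the client to catch up */
      return rc;
  }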

As far as I can determine, everything else on the system runs correctly
after DosShutdown( 1 ) except for named pipes and possibly anonymous pipes.
 I'm guessing that anonymous pipes have problems, too, because I've been
the only one to report strange problems with WF/2 truncating compiler
error messages (which I think are sent through stdout redirected into an
anonymous pipe) -- problems that start only after I've been using the
computer for some time and won't go away without a reboot.

This leaves some questions open as to how DosShutdown( 1 ) is supposed to
be used, and why the Control Program Reference makes it sound as though it
puts the file systems in a state in which no data would be lost if power
were cut.  At the moment, I don't know the answers.  But I've been
wondering whether some APM code in OS/2 calls DosShutdown( 1 ) before
shutting down the hard disk.  If so, a lot of OS/2 notebook and laptop
systems could be in a state where named pipes are not reliable, and nobody
has noticed because named pipes are so often used for networking, laptops
are often not on a network, and many of the problems only surface when
pushing enough data through a pipe to fill up its buffers.

Anyhow, if you know of anybody who knows more about DosShutdown( 1 ),
maybe you could ask them what its purpose is and whether it is used by any
OS/2 code shipped by IBM.


--- Maximus/2 2.02
* Origin: OS/2 Connection @ Mira Mesa, CA (1:202/354)
SEEN-BY: 270/101 620/243 711/401 409 410 413 430 807 808 809 934 955 712/407
SEEN-BY: 712/515 517 628 713/888 800/1 7877/2809
@PATH: 202/354 300 777 3615/50 396/1 270/101 712/515 711/808 809 934
