Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Path: utzoo!mnetor!seismo!lll-crg!nike!think!mit-eddie!genrad!decvax!bellcore!ulysses!mhuxr!mhuxt!houxm!ho95e!wcs
From: wcs@ho95e.UUCP (#Bill_Stewart)
Newsgroups: net.news,net.micro.mac,net.sources.d
Subject: Re: Backbone automatic news-compression question ...
Message-ID:
Date: Sun, 21-Sep-86 01:34:34 EDT
Article-I.D.: ho95e.857
Posted: Sun Sep 21 01:34:34 1986
Date-Received: Tue, 23-Sep-86 07:48:42 EDT
References:
Reply-To: wcs@ho95e.UUCP (Bill Stewart 1-201-949-0705 ihnp4!ho95c!wcs HO 2G202)
Organization: AT&T Bell Labs, Holmdel NJ
Lines: 38
Keywords: question [regarding] compression [of] news [for] transmission
Xref: mnetor net.news:2043 net.micro.mac:7140 net.sources.d:528

In article werner@ut-ngp.UUCP (Werner Uhrig) writes:

It was 64K before compression/uuencoding, and significantly smaller
afterwards.  That let it survive braindamaged mailers, at the expense
of being more work to read; the redundancy uuencode adds was probably
squeezed back out by compress on the backbone links.

On the subject of compression for transmission: if a backbone site is
feeding a given message to 24 other sites, does it compress the
outgoing batch once, or once per site (24 times)?  The question is
moot on a broadcast network (like Stargate, or ihnp4 -> the rest of
IH), but compressing 24 times could have a major CPU impact on an
overloaded machine like ucbvax, ihnp4, or allegra.  (A sketch of the
compress-once approach appears below.)

If I were designing "C News", I'd consider storing all articles in
compressed form, with special pre-defined tokens for the common
header keywords; that would yield major disk-space and transmission
savings, and would probably cost the news-reading programs little
CPU.  (A sketch of the header-token idea also appears below.)
-- 
# Bill Stewart, AT&T Bell Labs 2G-202, Holmdel NJ 1-201-949-0705 ihnp4!ho95c!wcs
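
Since the fan-out question turns on whether the batcher reruns
compress per link, here's a minimal sketch of the compress-once
alternative: run compress(1) over the batch once, then hard-link the
result into each neighbor's queue.  The spool paths, the neighbor
list, and the whole layout are invented for illustration; this is not
actual B News (or C News) source.

    /*
     * Sketch only: one compress pass per batch, one link(2) per
     * neighbor.  Paths and neighbor names are hypothetical.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static const char *neighbors[] = { "mnetor", "seismo", "decvax" };
    #define NNEIGHBORS (sizeof neighbors / sizeof neighbors[0])

    int main(void)
    {
        char cmd[512], dest[512];
        size_t i;

        /* One LZW pass over the batch, however many sites get it. */
        snprintf(cmd, sizeof cmd,
                 "compress -c /usr/spool/batch/outgoing"
                 " > /usr/spool/batch/outgoing.Z");
        if (system(cmd) != 0) {
            fprintf(stderr, "compress failed\n");
            return 1;
        }

        /* A hard link per neighbor costs a directory entry, not CPU. */
        for (i = 0; i < NNEIGHBORS; i++) {
            snprintf(dest, sizeof dest,
                     "/usr/spool/uucp/%s/batch.Z", neighbors[i]);
            if (link("/usr/spool/batch/outgoing.Z", dest) != 0)
                perror(dest);
        }
        return 0;
    }

The point is only that the expensive step runs once per batch;
whether the real batchers do this, or recompress per link, is
exactly the question asked above.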
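
And for the header-token idea, a minimal sketch of the write side,
assuming one byte of token space per common keyword.  The token
values, the keyword table, and the function name put_header_line are
all hypothetical, not taken from any real news implementation.

    /*
     * Sketch only: substitute a one-byte token for each common
     * header keyword before storing the article compressed.
     */
    #include <stdio.h>
    #include <string.h>

    static const char *keywords[] = {
        "From: ", "Newsgroups: ", "Subject: ", "Message-ID: ",
        "Date: ", "References: ", "Organization: ", "Path: "
    };
    #define NKEYWORDS (sizeof keywords / sizeof keywords[0])
    #define TOKEN_BASE 0x80  /* bytes >= 0x80 never start an ASCII keyword */

    /* Emit one header line, tokenizing a known keyword prefix. */
    void put_header_line(const char *line, FILE *out)
    {
        size_t i;
        for (i = 0; i < NKEYWORDS; i++) {
            size_t n = strlen(keywords[i]);
            if (strncmp(line, keywords[i], n) == 0) {
                putc(TOKEN_BASE + (int)i, out); /* one byte for keyword */
                fputs(line + n, out);           /* rest of line verbatim */
                return;
            }
        }
        fputs(line, out);                       /* unknown header: as-is */
    }

Reading it back is the mirror image: a byte at or above TOKEN_BASE
expands to keywords[byte - TOKEN_BASE].  And compress still sees the
(now shorter) text afterwards, so the two savings should stack.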