RSJ> Many did, many did not {some were right on the money, even with our
RSJ> more extensive testing}, but our test bed was entirely different from
RSJ> the one they published, so naturally our answers should be slightly
RSJ> different, especially since we shot much more at the scanner, so our
RSJ> percentages were definitely different.
1. How did you verify that you had real viruses and not
corrupted samples? Did you replicate each sample yourself?
How many different viruses did you use in the test?
2. "we shot much more at the scanner" Can you clarify? How can
you shoot much more at a virus scanner?
RSJ> To describe our method: we used multiple configurations of PCs,
RSJ> everything from a Pentium Pro with 64 MBytes of memory to 286's with
RSJ> 1 MB of memory, DOS 5.0 through 6.22, Windows 2.15 through 3.11, and
RSJ> NT 3.51 through 4.0. We also shot 100,000 instances of various
RSJ> viruses at the scanners {it almost took us longer to infect the
RSJ> servers than it did to run the tests}. The servers were everything
RSJ> from 3.11 to 4.1.
You "shot 100,000" instances of various viruses??? What
viruses? How did you replcate them? How did you know the goat
(host) files were good and that the virus would infect them?
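For the record, the usual way testers verify replication is to
checksum each goat file before and after exposure: a goat only counts
as infected if its contents actually changed. A rough sketch of that
bookkeeping (hypothetical paths and layout, in Python):

    import hashlib
    from pathlib import Path

    def sha256(path: Path) -> str:
        # Digest of the file's raw bytes.
        return hashlib.sha256(path.read_bytes()).hexdigest()

    # Hypothetical layout: pristine goat files in goats/, copies that
    # were exposed to a sample in exposed/.
    baseline = {p.name: sha256(p) for p in Path("goats").glob("*.com")}

    # A goat counts as infected only if its contents changed.
    infected = [p.name for p in Path("exposed").glob("*.com")
                if p.name in baseline and sha256(p) != baseline[p.name]]

    print(f"{len(infected)} of {len(baseline)} goat files were modified")

Did you do anything like that, or did you just assume every sample
"took"?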
RSJ> Their answers are in percentages and are just numbers. How an
RSJ> individual interprets those numbers is that person's way of doing it.
RSJ> In _MY_ system environment their test bed is rather pathetic, as I
RSJ> administer 750+ Novell servers, 500+ NT servers, and 40,000+ PCs
RSJ> worldwide. Hell, I have more in my office than their entire test bed,
RSJ> but knowing what they used and how their testing worked helped.
Having different configurations of PC hardware does not make
your tests any more reliable.
Sincerely,
Keith A. Peer
... Central Command Inc. U.S. Distributor for AVP and HS