echo: rberrypi
to: MICHAEL J. MAHON
from: MARTIN GREGORIE
date: 2018-04-29 18:39:00
subject: Re: 64 bit OS

On Sun, 29 Apr 2018 13:03:25 -0500, Michael J. Mahon wrote:

> Testing has the same limits as programming, and is subject to similar
> bugs, both logical and clerical. Its advantage is that it is a kind of
> orthogonal “implementation” of the spec, and so less likely to contain
> similar bugs.
>
Two points:

- the test harness and list of tests *must* be written without reference
to the code being tested and, ideally, *should* not be written by the
author of the code to be tested, though in many cases the latter is not
going to happen. Writing tests from the spec is vital, especially as
doing it can smoke out errors and omissions in the spec.

- the test harness and tests should be written to report only deviations
from a set of expected results and *should* have the same lifetime as the
code they test - IOW each time the code is amended it should be
regression tested by rerunning the tests against it and fixing any
unexpected deviations from expected results, followed by adding any new
tests that the changes require and/or modifying the expected results.

All this is easier to do than it might appear:

- it's quite easy to write a test harness that plays a set of scripted
  tests through a custom module that interfaces the harness to the code
  being tested.  The custom interface modules are little more than
  cut'n'paste exercises once you've written the first one.

- since the tests are scripted it's very easy to add more tests for edge
  cases and to test normal operation with inductive methods.

- generating and modifying expected results is easy too, if the test
  harness lists test scripts as they are executed, the scripts contain
  comments about expected results, and the output contains actual
  results immediately after the scripted actions that cause them to be
  output. Under these conditions the expected results are merely the
  captured output from a clean run of a test script.

- determining whether a test was successful or not can be done by
  using 'diff' to compare this run's output with the expected results:
  if any differences are found, the test failed.
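To make the workflow above concrete, here is a minimal sketch of such a
harness in Java. The code under test (a toy stack), the script commands,
the apply() interface routine, and the file names are all hypothetical
stand-ins, not part of the original post; the point is the shape: echo each
script line, append the actual result after it, and compare the captured
output against the expected-results file saved from a clean run.

```java
import java.nio.file.*;
import java.util.*;

public class Harness {
    // Hypothetical code under test: a simple stack stands in for a library.
    static final Deque<String> stack = new ArrayDeque<>();

    // The "custom interface module": maps script commands onto the code
    // under test and returns the actual result as a string.
    static String apply(String line) {
        String[] parts = line.split("\\s+", 2);
        switch (parts[0]) {
            case "push": stack.push(parts[1]); return "ok";
            case "pop":  return stack.isEmpty() ? "ERROR: stack empty" : stack.pop();
            case "#":    return null;  // script comment: notes on expected results
            default:     return "ERROR: unknown command " + parts[0];
        }
    }

    public static void main(String[] args) throws Exception {
        // A scripted test: comments describe the expected results.
        List<String> script = List.of(
            "# push two items; pops should return them in LIFO order",
            "push alpha",
            "push beta",
            "pop",
            "pop",
            "# popping an empty stack should report an error",
            "pop");

        // Echo each script line, then the actual result it produced.
        StringBuilder actual = new StringBuilder();
        for (String line : script) {
            actual.append(line).append('\n');
            String out = apply(line.trim());
            if (out != null) actual.append("  -> ").append(out).append('\n');
        }
        Files.writeString(Path.of("actual.txt"), actual.toString());

        // The captured output of a clean run becomes expected.txt; on later
        // runs the test passes iff the two match (what 'diff' would report).
        Path expected = Path.of("expected.txt");
        if (Files.exists(expected)) {
            boolean pass = Files.readString(expected).equals(actual.toString());
            System.out.println(pass ? "PASS" : "FAIL: output differs from expected");
        } else {
            Files.writeString(expected, actual.toString());
            System.out.println("Expected results captured from clean run.");
        }
    }
}
```

In practice you would run 'diff expected.txt actual.txt' instead of the
in-process comparison, so a failed test also shows you exactly which
scripted action deviated.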

I've been using this test method for the contents of both C and Java code
libraries for quite a long time now: it works well for both languages.


--
Martin    | martin at
Gregorie  | gregorie dot org

--- SoupGate-Win32 v1.05
* Origin: Agency HUB, Dunedin - New Zealand | FidoUsenet Gateway (3:770/3)

SOURCE: echomail via QWK@docsplace.org
