On Wed, 12 Dec 2018 08:29:31 +0000, Jan Panteltje wrote:
> Many (160 people IIRC) were killed when the airspeed sensor failed and
> indicated too low an airspeed. The poor pilots did not know how to stop
> that computer.. had no root access.
>
It may or may not have had in-cockpit controls - that hasn't come to
light yet. What we know about this crash so far is that:
- the autotrim system was not mentioned in the manuals for this 737
version, and this was the first 737 variant to have the feature.
- the pilots had received no training on it at all and so did not know
how it worked or how to disable it: a direct result of the lack of
documentation.
- the 737 autotrim has a major bug: it gets confused when the dual AOA
sensors disagree. In this case one sensor had failed, so of course they
disagreed (see the sketch after this list).
- the crew on the previous flight hit a less serious occurrence of the
same problem, which they overcame, but they didn't tell anybody about
it.
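
For anyone who hasn't met this sort of logic: the cross-check amounts
to something like the C sketch below. To be clear, this is a
hypothetical illustration, not Boeing's code; the threshold, the names
and the fail-safe behaviour are all my own assumptions.

#include <math.h>

/* Hypothetical AOA cross-check - NOT Boeing's code. The threshold,
 * names and fail-safe behaviour are assumptions for illustration. */
#define AOA_DISAGREE_DEG 5.5   /* assumed disagreement threshold */

typedef enum { AUTOTRIM_ACTIVE, AUTOTRIM_INHIBITED } trim_state;

/* Vanes that agree within the threshold can be trusted; otherwise the
 * sane response is to stop trimming and annunciate the fault rather
 * than silently keep acting on one possibly-failed vane. */
static trim_state autotrim_guard(double left_aoa_deg, double right_aoa_deg)
{
    if (fabs(left_aoa_deg - right_aoa_deg) > AOA_DISAGREE_DEG)
        return AUTOTRIM_INHIBITED;   /* disagree: fail safe, alert crew */
    return AUTOTRIM_ACTIVE;
}

int main(void)
{
    /* one vane failed hard-over: 22.5 vs 4.0 degrees */
    return autotrim_guard(22.5, 4.0) == AUTOTRIM_INHIBITED ? 0 : 1;
}

If the reports are accurate, the system did nothing of the sort: it
kept trimming on bad data, which is the "gets confused" part.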
> We are moving towards a situation where artificial neural nets are
> going to run everything: in the medical world they already make
> diagnoses, in traffic they steer cars, and in the military they are in
> autonomous weapons. Nobody can pull the plug, so to speak [1].
>
And equally damning, nobody understands how a neural net actually
recognises the situation/image/sound/whatever. Worse still, there is
not the slightest possibility that a neural net will ever be able to
explain why it made a decision.
Please don't call these things AI - they are no more AI than the cascaded
decision tables (that fuelled the early '80s AI bubble) were. They are
*not* any form of AI - merely trainable pattern recognition systems.
In my book it should be forbidden to call anything an AI unless it can
output an understandable report showing how it came to make a decision
and act on it.
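Even a toy rule-based system clears that bar, because every conclusion
can cite the rule and the inputs that produced it. Here's a minimal C
sketch of the idea - the rules, names and thresholds are all invented
for illustration:

#include <stdio.h>

/* Toy rule-based decision with an audit trail. The point is that each
 * conclusion names the rule and inputs behind it - exactly what a
 * trained net cannot do. All rules/thresholds invented for this
 * illustration. */
struct reading { double airspeed_kt; double aoa_deg; };

static void decide(struct reading r)
{
    if (r.aoa_deg > 14.0)
        printf("STALL WARNING: rule R1 fired (AOA %.1f deg > 14.0)\n",
               r.aoa_deg);
    else if (r.airspeed_kt < 120.0)
        printf("LOW SPEED: rule R2 fired (airspeed %.0f kt < 120)\n",
               r.airspeed_kt);
    else
        printf("NORMAL: no rule fired (AOA %.1f deg, %.0f kt)\n",
               r.aoa_deg, r.airspeed_kt);
}

int main(void)
{
    struct reading r = { 250.0, 16.2 };   /* a vane reading high */
    decide(r);
    return 0;
}

The report is trivial here, but that's the point: the decision path is
inspectable, which is precisely what a trained net can't offer.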
--
Martin | martin at
Gregorie | gregorie dot org