On 2019 Mar 16 20:30:00, you wrote to me:
TL> -=> On 03-16-19 04:27, mark lewis wrote to Tony Langdon <=-
ml>> at one time, my makenl was configured to allow operators to submit
ml>> their own individual nodelist entries... it worked quite well when the
ml>> proper format and data was submitted...
TL> Which then relies too much on human input. Wouldn't be the first time I've
TL> typoed something. I only discovered half of my nodelist was broken
yup, depending on the typo, makenl or another nodelist processor might have
found it... we've all seen what happens when a broken segment gets included and
entire nets and regions are dropped out of the nodelist...
TL> when I started generating DNS RRs for it, and finding that half of the
TL> entries didn't convert.
i can't say that i've run into that in a similar situation where i was
generating f.n.z.my.domain hostnames for my old email/FTN gating setup, but i
did have to really work on my regexes ;)
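
for illustration, the general shape of that conversion is something like
this... just a sketch, not my old gating code, and the domain here is only a
placeholder:

import re

# zone:net/node with an optional .point, e.g. 1:3634/12.73
FTN_RE = re.compile(r'^(?P<zone>\d+):(?P<net>\d+)/(?P<node>\d+)(?:\.(?P<point>\d+))?$')

def ftn_to_hostname(addr, domain='my.domain'):
    """map an FTN address to an f.n.z style hostname under the given domain."""
    m = FTN_RE.match(addr.strip())
    if m is None:
        raise ValueError('not a zone:net/node[.point] address: %r' % addr)
    parts = []
    if m.group('point'):
        parts.append('p' + m.group('point'))
    parts.append('f' + m.group('node'))
    parts.append('n' + m.group('net'))
    parts.append('z' + m.group('zone'))
    return '.'.join(parts + [domain])

# ftn_to_hostname('1:3634/12.73') -> 'p73.f12.n3634.z1.my.domain'
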
TL> I don't see why in 2019 we need to be relying so heavily on humans to
TL> validate syntax that can be done by machine. Let the machine do that
TL> and let the people look at the _content_.
makenl and similar nodelist processors are supposed to be used to validate the
format... humans have been checking the content... that's been SOP for eons...
ml>> already done and handled since eons... just no web interface where
ml>> you have to employ more security than necessary to prevent bots and
ml>> humans from attacking, changing others' entries, submitting
ml>> invalid/false data, etc... a NC/HUB should know who is in their
ml>> segment and not just rubber stamp what is sent to them for
ml>> processing... the RCs/ZCs have to have some trust in their NCs but
ml>> they should also still check the segments they generate to send
ml>> upstream...
TL> Good points re security. I still think some machine validation of
TL> syntax on entry/generation would be helpful. Obviously, only a human
TL> can verify that the contents of the nodelist entries are actually
TL> correct.
very true... the problem would come when bots spew properly formatted garbage
into the list, which a human then has to remove before processing with makenl...
ml>> how do you know if/when invalid data gets in? i'm speaking of data
ml>> that passes the tests but is still invalid/incorrect...
TL> ATM, it's a one man show, so same way you do - manually. ;)
i freely admit that it was a long time before i looked at makenl and
implemented it in my setup... i was doing the whole thing manually before
then... manually editing the segment and then manually attaching it to a
netmail to my upstream coordinator... when i added makenl, things got a lot
easier but one still has to manually edit the segment in a plain ASCII text
editor... edlin was used over here for a long time OB-)
these days, the nets in my region send their segments in, makenl finds them in
the inbound and moves them when the testing function is executed... if the
testing and the manual review pass, the NCs are notified that their segment has
been processed and accepted when the process function is executed... makenl
generates the file-attach netmail and puts it in the netmail directory where
the mail tosser then processes and exports it so the mailer can handle it...
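
stripped down to the bones, the collection step of that flow looks something
like this... just a sketch of what i described, not makenl itself, and every
path and filename here is invented:

import os, shutil

INBOUND = '/fido/inbound'             # hypothetical secure inbound
WORKDIR = '/fido/nodelist/segments'   # hypothetical working directory

def collect_segments(expected=('net3634',)):
    """move submitted segments from the inbound to the working directory
    and report which ones arrived so their NCs can be notified later."""
    found = []
    for name in expected:
        src = os.path.join(INBOUND, name)
        if os.path.isfile(src):
            shutil.move(src, os.path.join(WORKDIR, name))
            found.append(name)
    return found
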
i'm not saying that makenl will catch everything, though... there are only a
few required fields, so a line only needs some minimum number of commas... the
trailing comma-separated fields aren't mandatory, so a missing one can slip
through... i've seen two flags joined into one because a comma was missed, and
that's where the human comes into play, but those are easy to miss, too... it
isn't perfect and the format could stand some updating so the flag fields are
denoted in a better manner, but that would break a lot of existing software, so
we keep on doing what it takes to make things work...
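
for what it's worth, the bare-minimum syntax check i'm talking about boils down
to something like this... again, a sketch of the idea, not what makenl actually
does:

KEYWORDS = {'', 'Zone', 'Region', 'Host', 'Hub', 'Pvt', 'Hold', 'Down'}

def check_line(line):
    """return a list of problems found in one FTS-0005 style nodelist line."""
    problems = []
    fields = line.rstrip('\r\n').split(',')
    if len(fields) < 7:
        problems.append('only %d fields, need at least 7 (keyword..baud rate)'
                        % len(fields))
        return problems
    keyword, number = fields[0], fields[1]
    if keyword not in KEYWORDS:
        problems.append('unknown keyword %r' % keyword)
    if not number.isdigit():
        problems.append('node number %r is not numeric' % number)
    for i, field in enumerate(fields[2:5], start=2):
        if ' ' in field:
            problems.append('field %d contains a space, should be underscores: %r'
                            % (i, field))
    # the flags (fields 7 and up) are optional, which is exactly where a
    # dropped comma can glue two flags together and still look "valid"
    return problems
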
)\/(ark
Always Mount a Scratch Monkey
Do you manage your own servers? If you are not running an IDS/IPS yer doin' it
wrong...
... BEWARE - Tagline Thief in this echo
---
* Origin: (1:3634/12.73)