Subject: ATM Robo vs Couder Mask vs Hartmann vs RoboC Battle Royal!
From: "James Lerch" To: Cc: "ATM List" Reply-To: "James Lerch" Greetings All, This is going to be a 'little' long winded, if your not interested in all this Robo garbage, just hit that 'delete' key now :) (BTW, I'm writing this during a caffeine induced intoxication, so pls excuse my wandering thoughts!) If your still reading this, I'm afraid this post is actually going to need a table of contents, so here it goes: Chapter 1, General Robo Comments Chapter 2, Couder Mask vs Robo results Chapter 3, Robo Vs RoboC by Dale Easton Chapter 4, Robo Vs Hartmann by James Burrows Chapter 5, Robo Vs. Robo Blink test Chapter 6, Robo Vs. Field tests Chapter 7, Summation ** Chapter 1, General Robo Comments The original goal of Robo was to create a faster and less subjective mirror test for use in our Telescope Optic Class. Our ATM class currently has upwards of 15 students, all at various stages of Grinding, Polishing and Figuring. Over the last few years its not uncommon to have 5 or more students in a loop doing a few rounds of figuring, washing, drying, and testing. Prior to Robo this was a most unfavorable environment, with lots of eye strain and varying results, and very few tests being accomplished. With Robo, mirror testing has turned into a "Plop, point, and click" affair, with highly repeatable results given in less than 5 minutes (Accuracy is yet to be determined, more on that later) The most recent thread on Robo started as a result of obtaining different KE results based on image intensity, so let me speak to this for a moment. Originally Robo used a Slitless Foucault source. During this time my normal test was done with a bright input image source and very low 'Shade of Gray' value. My current hypothesis is this gave fairly accurate results. However, during a recent test it was noted that going to a darker input image, or Higher "shade of gray" gave a larger value for total KE readings, which was rather disturbing. My current hypothesis for why the above was noted is based on non-uniform illumination as a change in the "Virtual Slit Width" in the slitless test setup that results from a changing input image brightness or changing "Shades of Gray" Since my recent transition to a Slit source Foucault setup, changes in input image intensity or "Shades of Gray" have greatly reduced the change in KE readings. As Bob May pointed out, even in a slit source based test, one of the knife edges is still not being used when at a zone null. However its my opinion that the slit source greatly reduces non-uniform illumination, and therefore greatly increases the test accuracy. One more note on my current light source setup. I use a Sanded flat LED, which is pushed back in its holder about an 1/8" (the holder is a piece of 1/4" thick plywood with a hole drilled in it and a friction fit for the LED) With the LED pushed back in its hole an 1/8", I then covered the hole with a piece of 'frosty' cellophane tape (Scotch tape?). The knife edges are held flush to the face of the cellophane tape, with one knife edge being long enough to be used as both part of the slit and the knife edge for the camera. In essence my light source is the result of a 'Frosty' LED illuminating a 'Frosty' piece of tape. This new setup seems to give VERY uniform illumination, which appears to be a good thing! ** CHAPTER 2 (Robo vs Couder Mask Eyeball testing) This whole SAGA started as a result of testing a sample optic sent to me by Carl Zambuto. 
** Chapter 2 (Robo vs Couder Mask Eyeball Testing)

This whole SAGA started as a result of testing a sample optic sent to me by Carl Zambuto. When Robo's results were radically different from Carl's results, Robo was considered to be in error. To test this consideration, I made a Couder mask that replicated the Couder mask Carl used for testing the optic. An image of this mask may be seen here:

http://lerch.no-ip.com/atm/Comp/Mask.jpg (36KB)

I then took the optic and the mask to our ATM lab, and had our most experienced Couder mask Foucault test operators run the numbers using our 'old school' test setup. Amazingly, the results were almost identical to Carl's test results done on the opposite coast! The resulting Couder mask numbers (an average of the West Coast and East Coast test results) may be viewed here:

http://lerch.no-ip.com/atm/Comp/Couder.gif (21KB)

For those that don't want to follow the link, the image shows that the optic is essentially perfect, with a 0.99 Strehl.

For comparison, here are the results from Robo (using the 8 sets of KE readings from the most recent "shades of gray" experiment):

http://lerch.no-ip.com/atm/Comp/FigXp_SOG_Ave.gif (22KB)

The above results have a standard deviation (n-1) of 0.0028" and show the optic as strongly under corrected, with a Strehl of 0.796. At this point, with such a large deviation between test results, Robo was considered in error.

** Chapter 3 (Robo vs RoboC by Dale Easton)

In an effort to eliminate software as the culprit, Dale Easton provided me his version of Robo written in C++. Dale's software uses an entirely different approach than my methods: Dale uses fixed KE positions and solves for the resulting zone null radius. He also uses a different method to define the zone null radius than I do (I'm not entirely clear on the method, but that is irrelevant at the moment). After running his version of RoboC on my hardware, I obtained the following test comparison image:

http://lerch.no-ip.com/atm/Comp/Dale.jpg (225KB)

In the above image, the top graph is Dale's RoboC results and the bottom graph is from one of my SOG tests. Both test results show a strongly under corrected optic, with Dale's code giving a Strehl of 0.736 compared to my Robo result of 0.716. Of interest are the close Strehl readings and the nearly identical surface error profiles. However, this doesn't eliminate my hardware, which could be in error.
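Since Strehl ratios are quoted all through this post, a short aside on where such a number usually comes from. Given a zonal wavefront-error profile (remember the surface error on a mirror doubles on the wavefront), the Strehl ratio is commonly estimated from the RMS wavefront error with the extended Marechal (Mahajan) approximation, Strehl ~ exp(-(2*pi*sigma)^2) with sigma in waves. Here is a minimal sketch; the area weighting and the sample numbers are mine and are not taken from FigureXP, Robo, or RoboC.

import numpy as np

# Rough sketch of an RMS-wavefront-error -> Strehl estimate.  The zone
# weighting scheme and the example numbers are illustrative assumptions,
# not output from any of the tests described above.

def strehl_estimate(zone_radii, wavefront_err_waves):
    """Estimate Strehl from a zonal wavefront-error profile (in waves),
    weighting each zone roughly by the annular area it represents."""
    r = np.asarray(zone_radii, dtype=float)
    w = np.asarray(wavefront_err_waves, dtype=float)
    weights = r / r.sum()            # annulus area grows ~linearly with radius
    mean = np.sum(weights * w)
    rms = np.sqrt(np.sum(weights * (w - mean) ** 2))
    return np.exp(-(2.0 * np.pi * rms) ** 2)

# For reference, about 1/14 wave RMS wavefront error comes out near a
# Strehl of 0.8 (the usual "diffraction limited" criterion).
print(strehl_estimate([0.3, 0.5, 0.7, 0.9], [0.00, 0.02, -0.03, 0.05]))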
** Chapter 4 (Robo vs Hartmann by James Burrows)

This chapter is devoted to hopefully eliminating my hardware as the source of the deviation between classic Couder mask testing and Robo testing. Towards this end I performed a Hartmann test using a 7-zone Hartmann mask with 10mm wide by 20mm tall openings, each separated by 10mm (excluding the center 20mm radius section of the mirror). The test was performed with a non-lasing laser diode as the source and a Vesta web-cam CCD as the sensor. While this test was rather difficult to set up accurately (keeping everything aligned, square, and rotationally oriented was difficult), I was able to get two data sets, one inside of ROC and one outside of ROC. The results of these two tests may be viewed here:

http://lerch.no-ip.com/atm/Comp/Hart_Inside.gif (107KB)
http://lerch.no-ip.com/atm/Comp/Hart_Outside.gif (107KB)

In each of the above images, the top half is the test result showing the surface profile of the optic and its numerical results. The bottom half of each image shows the correlation between the actual Hartmann mask intensity profile (in green) and the simulated profile (in red). Of interest is that the correlation is greater for the test conducted inside ROC than for the one outside ROC.

For those that don't want to download the images, the results are as follows:

Inside ROC = under corrected, 0.679 Strehl
Outside ROC = under corrected, 0.824 Strehl

Of interest (for me anyway) is the surface error profile, especially when compared to the Robo SOG experiment results seen here:

http://lerch.no-ip.com/atm/Comp/FigXp_SOG_Ave.gif (22KB)

My interest in the comparison of the surface error profiles is the high correlation between all three plots. All three show a high center, a valley near the 65% radius, a small hill near the 75% radius, another valley near the 85% radius, and finally a turned-up edge. I must ask, what are the odds of this being just a coincidence, especially given the complete lack of correlation with the profile from the classic Couder mask results?

** Chapter 5 (Robo Blink Test)

A suggestion was proposed on this list that perhaps aiming for a matched "shade of gray" was inappropriate, and that a more appropriate simulation of classic eyeball Couder mask testing could be obtained by using the "blink method". Towards this goal I came up with the following experiment:

A) When Robo called a zone null by matching a 'shade of gray', the following process was implemented.
B) A search was performed at 0.001" increments over +/- 0.015" from where Robo called the zone null.
C) At each search location, the code laterally moved the knife edge into the return beam until the intensities on both sides of the fixed zone radius were at 0.
D) The code then stepped the KE out of the return beam in 0.000125" increments (the smallest step possible with my hardware). After each step out of the return beam, 30 frames of intensity data were averaged together, and the left and right radius intensities were recorded to a file for post processing.
E) The code continued to step the KE out until both sides of the zone radius had exceeded an intensity of 200.
F) The code then repeated the process until all 31 longitudinal positions had been recorded (15 locations closer to the mirror, 1 at the pre-determined null position, and another 15 positions further away from the mirror).
G) This test took nearly 3 hours to complete! (Not including post processing time!)
H) Post processing went as follows. For each longitudinal position, the difference at each lateral position was calculated (left zone intensity - right zone intensity). The difference was entered into a spreadsheet, such that a positive value indicated the left zone was brighter than the right zone, and a negative value indicated the opposite (right > left). Once all this data was entered into the spreadsheet, the longitudinal location whose total of the lateral knife edge values was closest to zero was considered the zone null. (A small code sketch of this step follows at the end of this chapter.)

My assumption is that this would in essence resemble the "blink" version of the classic Couder mask test, since a full range of lateral-position intensity differences was recorded. For those interested, here's the spreadsheet that was created:

http://lerch.no-ip.com/atm/Comp/Brute_Force.xls (90Kb)

The results of this test showed that when Robo called a null using a matching shade of gray, the 'blink' method gave an identical answer for each zone, within 0.001". In other words, it didn't disprove the normal Robo code results. (Nothing ventured, nothing gained?)
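To make step H above concrete, here is a small sketch of that post-processing pass: sum the (left - right) intensity differences over every lateral step at each longitudinal position, then take the position whose total is closest to zero as the null. The data layout and the sample numbers are invented for illustration; this is not the actual spreadsheet.

# Sketch of the "blink" post-processing described in step H above.
# `records` holds (longitudinal_position, left_intensity, right_intensity)
# tuples, one per lateral knife-edge step; the numbers below are made up.

def blink_null(records):
    """Return the longitudinal KE position whose summed (left - right)
    zone intensity differences come closest to zero."""
    totals = {}
    for pos, left, right in records:
        totals[pos] = totals.get(pos, 0.0) + (left - right)
    return min(totals, key=lambda p: abs(totals[p]))

records = [
    (-0.001, 120, 150), (-0.001, 80, 110), (-0.001, 40, 75),
    ( 0.000, 131, 128), ( 0.000, 90,  93), ( 0.000, 52, 50),
    ( 0.001, 150, 118), ( 0.001, 112, 82), ( 0.001, 70, 38),
]
print(blink_null(records))   # -> 0.0 for this made-up data

The actual run described above used 31 longitudinal positions and many more lateral steps per position, but the bookkeeping is the same.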
** Chapter 6 (Robo vs Field Tests)

Our local ATM group had a public observing night last Saturday. During this event, 6 telescopes that had been tested with Robo were present. (We do not have an OTA to test Carl's mirror yet, but we are working on this.) During this public observing night we did some extensive field testing of these 6 telescopes using the Ronchi star test and Suiter's defocused star test (with Suiter's book in hand!).

Here's a list of the optics tested:

14" F/4.5 (commercial mirror)
12.5" F/6
11" F/5.8
8" F/8
8" F/6
6" F/5
6" F/4

While an entire book could be (and has been) written on field testing astronomical telescopes, I shall only give the following comments (since field testing is entirely subjective at this time):

"Mirrors that tested near perfect with Robo (0.95 or greater Strehl) had Ronchi lines with no perceivable curvature inside or outside focus."

"Mirrors with lower Strehls showed curvature in the Ronchi lines, and that curvature matched the over/under corrected predictions made by Robo."

"Of all the mirrors, two were 'critically' star tested (the 6" F/5 and the 14.5" F/4.5). The 6" showed near perfect correction and the 14.5" showed over correction; both results concur with Robo, Suiter's illustrations, and the predicted defocused structure given by FigureXP's star test simulation."

Now, does any of the above mean anything? Honestly, NO! We went into this field testing with the foreknowledge of 'what to look for' on each mirror. It is highly possible that this foreknowledge led us astray!

** Chapter 7 (Summation)

#1 Classic Couder mask testing (using identical Couder masks), conducted by skilled Foucault test operators (whose results are nearly identical), shows the Zambuto optic as nearly perfect (0.98 Strehl).

#2 Two versions of Robo-Foucault software, and Jim Burrows' Hartmann test, show the optic as under corrected, with a Strehl somewhere between 0.7 and 0.8.

#3 Field testing of optics previously tested by Robo-Foucault seems to agree with Robo-Foucault's predictions.

The above presents a conflict between the classic method of ATM optical testing using a Couder mask (which has tens of decades of history supporting its results) and more recent testing methods with only a few years of history supporting theirs. The question is: which method is accurate?

To hopefully solve this dilemma, Carl's optic is currently on its way to Bob Royce of R.F. Royce Optical, for testing with Mr. Royce's double-pass autocollimation Ronchi test. As I understand this test, it will tell us whether the optic is perfect or not. (Humor mode on!) Anyone wanna place some bets? (Humor mode off!)

Well, I guess I've wasted enough bandwidth for the moment. All that's left to do is anxiously sit and wait for the test results from Mr. Royce!

Take Care,
James Lerch
http://lerch.no-ip.com/atm (My telescope construction, testing, and coating site)

--- BBBS/NT v4.01 Flag-4
 * Origin: Email Gate (1:379/100)
SOURCE: echomail via fidonet.ozzmosis.com