| echo: | |
|---|---|
| to: | |
| from: | |
| date: | |
| subject: | A question |
NH> It seems to me that the ideal would be some sort of a hash
NH> involving a second array, as long in bits as there are integer
NH> numbers. Each integer would set a bit. If that bit had been
NH> previously set, bingo, a duplicate.

Something else just dawned on me. After all the values (a few thousand
at most) are inserted into the 4+ gigabyte array (with each value being
represented by just 1 bit in the array), each bit in the array must be
examined. That makes any previously supposed advantage evaporate (and
turn into a big-time deficit).

JB>if you sort the array and then parse it for dupes
JB>you can do that in O(n)...

I've never heard of a sorting algorithm working in O(n) time.

JB> ... time using a radix sort.

JB>and using hopefully much less memory (need space for 4 times the
JB>size of the array)

JB>qsort O(n.log(n)) could be faster than radix sort for small arrays.

Define small? What is advantageous about radix sort for large arrays?
Does large equal everything that's not small?

þ CMPQwk 1.42 999

--- Maximus/2 3.01
 * Origin: COMM Port OS/2 juge.com 204.89.247.1 (281) 980-9671 (1:106/2000)
SEEN-BY: 633/267 270
@PATH: 106/2000 633/267
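For reference, here is a minimal sketch of the bit-per-value scheme NH describes, assuming the values are 32-bit unsigned integers, so the bitmap is 2^32 bits (512 MB). The input array below is hypothetical; the sketch just shows that a repeat is flagged at the moment its bit is found already set.

```c
/* Sketch of the bit-array idea quoted above, assuming 32-bit
 * unsigned values: one bit per possible value, 2^32 bits total.
 * A duplicate is reported when its bit is already set. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

int main(void)
{
    size_t bytes = (size_t)1 << 29;          /* 2^32 bits / 8 = 512 MB */
    uint8_t *bitmap = calloc(bytes, 1);
    if (!bitmap) {
        fprintf(stderr, "could not allocate bitmap\n");
        return 1;
    }

    uint32_t sample[] = { 7, 42, 1000000, 42, 7 };   /* hypothetical input */
    size_t n = sizeof sample / sizeof sample[0];

    for (size_t i = 0; i < n; i++) {
        uint32_t v = sample[i];
        uint8_t mask = (uint8_t)(1u << (v & 7));
        if (bitmap[v >> 3] & mask)
            printf("duplicate: %u\n", v);    /* bit already set */
        else
            bitmap[v >> 3] |= mask;          /* first occurrence */
    }

    free(bitmap);
    return 0;
}
```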
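And a sketch of the sort-then-scan approach JB suggests, using a byte-at-a-time LSD radix sort: four counting passes over the data, so the work grows linearly with the number of elements for fixed-width 32-bit keys, and duplicates then show up as equal neighbours in one final scan. The temporary buffer is where the extra memory JB mentions goes. The data here is hypothetical.

```c
/* Minimal LSD radix sort for 32-bit unsigned keys, one byte per pass,
 * followed by an adjacent-equality scan for duplicates. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <string.h>

static void radix_sort_u32(uint32_t *a, size_t n)
{
    uint32_t *tmp = malloc(n * sizeof *tmp);
    if (!tmp)
        return;

    for (int shift = 0; shift < 32; shift += 8) {
        size_t count[256] = { 0 };

        /* histogram of the current byte */
        for (size_t i = 0; i < n; i++)
            count[(a[i] >> shift) & 0xFF]++;

        /* prefix sums give each bucket's starting offset */
        size_t offs[256], sum = 0;
        for (int b = 0; b < 256; b++) {
            offs[b] = sum;
            sum += count[b];
        }

        /* stable scatter into tmp, then copy back for the next pass */
        for (size_t i = 0; i < n; i++)
            tmp[offs[(a[i] >> shift) & 0xFF]++] = a[i];
        memcpy(a, tmp, n * sizeof *a);
    }
    free(tmp);
}

int main(void)
{
    uint32_t a[] = { 90125, 7, 42, 7, 1000000, 42 };   /* hypothetical input */
    size_t n = sizeof a / sizeof a[0];

    radix_sort_u32(a, n);

    for (size_t i = 1; i < n; i++)
        if (a[i] == a[i - 1])
            printf("duplicate: %u\n", a[i]);
    return 0;
}
```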
| SOURCE: echomail via fidonet.ozzmosis.com | |