| echo: | |
|---|---|
| to: | |
| from: | |
| date: | |
| subject: | A question |
PS>What I'm saying is that sometimes an algorithm that looks
PS>faster on paper turns out to suck in practice.

That makes me think that in the end, it's really necessary to have some
logic at the start of the program to determine how big the array is going
to be and which method to use.

PS>> The most primitive algorithm for this problem
PS>> (compare each with each) would be O(n^2).

NH> Yipes, I hope it doesn't come to that.

PS>That one is very memory efficient, though, and good enough
PS>for smaller input sets. Think about using your approach for
PS>64 bit integers, how much memory would your hash table
PS>consume?

I tried my original method last night. I didn't get very far, considering
that my real + virtual memories total less than the 4+ gigs required.

How does this sound: do a QuickSort on the array (after it's been
determined to all fit easily into memory). QuickSort sorts in place, and
its big-O is O(n log n). Then go once through the array eliminating the
dupes. The question then becomes: how does one eliminate the holes once
all the duplicates are eliminated?

þ CMPQwk 1.42 999
--- Maximus/2 3.01
 * Origin: COMM Port OS/2 juge.com 204.89.247.1 (281) 980-9671 (1:106/2000)
SEEN-BY: 633/267 270
@PATH: 106/2000 633/267
SOURCE: echomail via fidonet.ozzmosis.com