User talk:Rednaxela/kD-Tree
:: Okay, that makes sense. I have some other feature requests: a size() function to keep track of how many entries there are in the tree, and it'd be cool if there was something to make sure the size stays below a certain point. A possible problem might be the line with ''tree.weights = weights;'', because it doesn't reset the weights afterwards. Also, I'd selfishly like the addPoint functions and such to be non-static and without the KdTree<T> tree parameter (but I suppose that is a matter of taste). :P --[[User:Positive|Positive]] 01:03, 28 August 2009 (UTC)

:: Alright, I gave that optimization of calculating the node distance as-needed from the splits a try. The result: 1) if it replaces the existing calculation of the distance to the tight boundary, performance is worse, and 2) if I just use it to shortcut past the full distance-to-tight-boundary calculation, performance is about the same. This indicates to me that 1) calculating the exact tight-boundary distance for nodes eliminates a rather significant number of node visits, and 2) the overhead of tracking the data for split-based node distance is large enough that it's not worth using as a shortcut. --[[User:Rednaxela|Rednaxela]] 06:23, 1 September 2009 (UTC)

::: Keeping the weights across calls was intended, but the API for it was bad. It's now split into a setWeights() call. Haha, I don't know what I was thinking when I made the addPoint and nearestNeighbours functions static; I was in a strange mindset when I first wrote this beast. They're now non-static and tested to have no impact on performance. I'm not sure what you mean by "make sure size stays below a certain point". Do you mean removing old points? --[[User:Rednaxela|Rednaxela]] 03:39, 28 August 2009 (UTC)

:::: Great. :) Yes, that's exactly what I meant. It seems useful and safe to have some kind of deletePoint function and perhaps a built-in linked list system to remove old entries, so that the data stays up to date and won't eventually fill up the memory. --[[User:Positive|Positive]] 03:48, 28 August 2009 (UTC)

:::: Well... an optional limit on size is now supported. Deleting arbitrary points, however, is not, because it would conflict with the size limit: it would be far too painfully slow to remove such a point from the linked list that tracks the order points were added in. Also, to make sure this doesn't impact speed normally, that list is never even created if no limit was specified. Anyways, the code is getting closer to 500 lines than I'd like, so I think this is enough non-optimization features for any sane usage of it. --[[User:Rednaxela|Rednaxela]] 04:45, 28 August 2009 (UTC)
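
For illustration, here is a rough sketch of the oldest-point-eviction bookkeeping described above. The names are hypothetical and this is not the tree's actual code; the real tree threads this tracking through its own nodes so it costs nothing when no limit is set.
<syntaxhighlight lang="java">
import java.util.LinkedList;

// Hypothetical sketch: a FIFO of insertion order means that exceeding the
// size limit always evicts the oldest point first.
class SizeLimitedPoints {
    private final LinkedList<double[]> insertionOrder = new LinkedList<double[]>();
    private final int sizeLimit;

    SizeLimitedPoints(int sizeLimit) {
        this.sizeLimit = sizeLimit;
    }

    void add(double[] point) {
        insertionOrder.addLast(point);
        if (insertionOrder.size() > sizeLimit) {
            // The real tree would also delete this point from its leaf node
            // and decrement location counts up toward the root.
            insertionOrder.removeFirst();
        }
    }
}
</syntaxhighlight>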

I don't know if this is standard or not, but it seems strange that the constructor isn't the very first method in the class. And the method/sub-class ordering is more like... DrussGT's =) They are ordered by the time you added the code and where, aren't they? I know it isn't going to improve the execution speed, but... » <span style="font-size:0.9em;color:darkgreen;">[[User:Nat|Nat]] | [[User_talk:Nat|Talk]]</span> » 14:40, 28 August 2009 (UTC)

: Nah, not ordered by the time added at all. At one point I had the functions for 'bounds' grouped beside the variables, but they kind of drifted apart. There wasn't any real reason for the ordering except that nearby things often had some relevance to each other. Reformatted that a bit now so as to be saner and maintain that relevance-ordering :) --[[User:Rednaxela|Rednaxela]] 14:56, 28 August 2009 (UTC)

I've been testing with the pre-limit version of your tree in Portia, and it seems to be working. :) Now with your new version, I do get an error at the line ''double[] location = this.locationStack.pop();'': (''The method pop() is undefined for the type LinkedList<double[]>''). --[[User:Positive|Positive]] 15:53, 28 August 2009 (UTC)

: Ugh... Any reason why you're on ancient Java 5? Anyway, that method of LinkedList only exists in Java 6, not 5. Making it Java 5 compatible again... --[[User:Rednaxela|Rednaxela]] 16:11, 28 August 2009 (UTC)

:: Haha, actually I'm on Java 6. I compile everything using the Java 5 library though, so I know for sure I don't make code that's not Java 5 compatible. :) --[[User:Positive|Positive]] 16:14, 28 August 2009 (UTC)

I haven't exactly had time to look at this, but just out of curiosity, does this tree return the scans sorted by distance? By the way, congratulations on your optimization work. --[[User:Navajo|Navajo]] 00:43, 1 September 2009 (UTC)

: It returns the entries in [[wikipedia:Binary_heap#Heap_implementation|max-heap array ordering]], since that is the most efficient way to construct the list of nearest neighbours. Maintaining a list of the "lowest n values" doesn't actually require the list to be fully ordered; it only requires enough ordering that it's easy to throw out the largest value when over the size limit, making a max-heap the perfect structure for the task. It would be fairly simple and fast to convert this to sequential order for final output, but I saw no reason to bother with it since I've never seen a DC gun/movement care about the order of the outputs, only the distances associated with them. --[[User:Rednaxela|Rednaxela]] 01:25, 1 September 2009 (UTC)
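
As an illustration of that max-heap idea, here is a minimal, self-contained sketch using java.util.PriorityQueue. This is not the tree's actual code (it uses its own heap implementation), just the same principle.
<syntaxhighlight lang="java">
import java.util.Collections;
import java.util.PriorityQueue;

// A max-heap keeps the largest of the current "best n" on top, so checking
// whether a new candidate belongs, and evicting the worst, are both cheap.
public class BestN {
    public static PriorityQueue<Double> smallestN(double[] distances, int n) {
        PriorityQueue<Double> best =
                new PriorityQueue<Double>(n, Collections.<Double>reverseOrder());
        for (double d : distances) {
            if (best.size() < n) {
                best.offer(d);
            } else if (d < best.peek()) {
                best.poll();   // throw out the current largest
                best.offer(d);
            }
        }
        return best;   // note: iteration order is heap order, not ascending
    }
}
</syntaxhighlight>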

: My DC Gun does care about the order of the output. It computes pif angles discarding the ones that end outside the battlefield, so I need to know that I end up with the closest entries possible. --[[User:ABC|ABC]] 10:15, 1 September 2009 (UTC)

:: Err... I'm not sure I understand... Any sane DC-PIF gun would be discarding results that end outside the battlefield like you say, but it's not as if the order of output affects which ones end up inside/outside the battlefield. The outputted values are ensured to be the closest entries possible, just not in sequential order. I don't see how any gun could sanely care about the output order, but if so, which order would you prefer, ascending or descending? --[[User:Rednaxela|Rednaxela]] 12:39, 1 September 2009 (UTC)

::: I'm speculating, but maybe he builds a cluster of size M and then tries to keep the best N<M that stay inside the field. But I think that a sorted cluster should be optional, as it may impact the performance for most of the tree's users with no gain. --[[User:Zyx|zyx]] 13:24, 1 September 2009 (UTC)

::: Yes, that's exactly what I do. If it's too much trouble I can always sort them myself. --[[User:ABC|ABC]] 14:09, 1 September 2009 (UTC)

:::: Nah, it's not too much trouble, and as I explain to Nat below, it's quicker for me to get sorted output from the heap than it is for you to sort the data after. I already have this implemented on my computer at home; all I have to do is test how it impacts performance, and if the impact is measurable, make it optional. --[[User:Rednaxela|Rednaxela]] 14:52, 1 September 2009 (UTC)

:::: Hey ABC, unlike what you said... Simonton's tree may not always output in descending order! Reading Simonton's code indicates that it uses a PriorityQueue, and the order it outputs is the same as the iterator of PriorityQueue. If you read the Java docs, it very clearly states "<code>The Iterator provided in method iterator() is not guaranteed to traverse the elements of the PriorityQueue in any particular order. If you need ordered traversal, consider using Arrays.sort(pq.toArray()).</code>", therefore if Simonton's tree is currently returning in descending order, it just happens that your JVM does that. This means that the behaviour of versions of Shadow using Simonton's tree is VM-dependent. But after I add sorting support to my tree when I get home, Shadow using my new tree will be able to be sure its performance is not dependent on the JVM in question. --[[User:Rednaxela|Rednaxela]] 15:21, 1 September 2009 (UTC)
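
The quoted caveat is easy to demonstrate in isolation with a quick standalone test (not from any bot):
<syntaxhighlight lang="java">
import java.util.PriorityQueue;

public class PQOrderDemo {
    public static void main(String[] args) {
        PriorityQueue<Integer> pq = new PriorityQueue<Integer>();
        for (int v : new int[] {5, 1, 4, 2, 3}) {
            pq.offer(v);
        }
        // The iterator (used by toString) walks the backing array in heap
        // layout, e.g. [1, 2, 4, 5, 3], which is not sorted order.
        System.out.println(pq);
        // poll(), by contrast, always removes the smallest element: 1 2 3 4 5
        while (!pq.isEmpty()) {
            System.out.print(pq.poll() + " ");
        }
    }
}
</syntaxhighlight>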

::::: Thanks for that info! You are a true Java Guru. At the time I just blindly plugged in Simonton's tree instead of my brute force method, noticed the entries were "backwards" and adjusted my code accordingly... --[[User:ABC|ABC]] 15:53, 1 September 2009 (UTC)

::: I don't understand: how does the order affect the result of the PIF algorithm? On the other hand, returning the sorted data would take O(n log n) more time due to the heap, and that definitely affects the bots that don't care about the ordering. If we sort them ourselves, it would still take O(n log n). I see no point in adding the ordering to the tree, unless, of course, it won't affect the speed. » <span style="font-size:0.9em;color:darkgreen;">[[User:Nat|Nat]] | [[User_talk:Nat|Talk]]</span> » 14:23, 1 September 2009 (UTC)

:::: I want to find the 10 scans closest to the current one that end inside the battlefield. I ask the tree for the 20 closest scans, and then calculate the path for each of them, starting from the closest one, until I get 10 valid ones. --[[User:ABC|ABC]] 14:39, 1 September 2009 (UTC)

:::: (edit conflict, crossing out redundant things) <del>It was like Zyx said. The numbers are made up but it's like this: Shadow gets the 50 closest entries from the kD-Tree, it then runs PIF on each entry starting with the one with the least distance, '''and''' it stops processing once it has the PIF results of 10 that didn't run into walls. Again, 50 and 10 are made-up numbers for the sake of example.</del> Also Nat, you're very wrong about the speed there. ''Firstly'', reordering the n final results is MUCH quicker than the nearest-n-neighbours search itself, because O(n log n) only tells you how the speed scales with the number of values sorted; it doesn't tell you the base speed, which happens to be far quicker than the base speed of the nearest-n-neighbours algorithm. ''Secondly'', I can make the output results sorted in notably LESS than O(n log n) time: converting max-heap ordering to sequential ordering only requires the second part of a [[wikipedia:Heapsort|heapsort]], rather than a full run of a sorting algorithm. --[[User:Rednaxela|Rednaxela]] 14:52, 1 September 2009 (UTC)
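
For reference, the "second part of a heapsort" amounts to something like the following sketch, assuming an array already in max-heap order (a hypothetical helper, not the tree's code). No initial heapify pass is needed, which is exactly the saving being described.
<syntaxhighlight lang="java">
public class HeapOrder {
    // Turn a max-heap-ordered array into ascending order in place:
    // repeatedly swap the root (current maximum) to the back, then sift
    // down to restore the heap property on the shrinking prefix.
    public static void toAscending(double[] values) {
        for (int end = values.length - 1; end > 0; end--) {
            double tmp = values[0];
            values[0] = values[end];
            values[end] = tmp;
            siftDown(values, end);
        }
    }

    private static void siftDown(double[] heap, int size) {
        int i = 0;
        while (2 * i + 1 < size) {
            int child = 2 * i + 1;
            if (child + 1 < size && heap[child + 1] > heap[child]) {
                child++;
            }
            if (heap[i] >= heap[child]) {
                break;
            }
            double tmp = heap[i];
            heap[i] = heap[child];
            heap[child] = tmp;
            i = child;
        }
    }
}
</syntaxhighlight>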

::::: I just checked and your made up numbers (50/10) are exactly what I currently use in Shadow's melee gun. :) --[[User:ABC|ABC]] 15:53, 1 September 2009 (UTC)

:::::: Is the tree really fast enough to let you query 50 neighbours for all opponents? » <span style="font-size:0.9em;color:darkgreen;">[[User:Nat|Nat]] | [[User_talk:Nat|Talk]]</span> » 16:01, 1 September 2009 (UTC)

:::::: Oh wow, my guessing skills are quite good it seems, haha. As far as the speed of that goes, Nat, I'm pretty sure that in most DC-PIF guns the PIF takes considerably longer than the DC. --[[User:Rednaxela|Rednaxela]] 16:08, 1 September 2009 (UTC)

::::: Merge sort, which is the default for Arrays.sort(), is twice as fast as heap sort according to my book on algorithms, which has benchmark timings printed. And the second part of heap sort is still O(n log n); you still need to down-heapify ''n'' times. Unless there are faster ways to do it. » <span style="font-size:0.9em;color:darkgreen;">[[User:Nat|Nat]] | [[User_talk:Nat|Talk]]</span> » 15:13, 1 September 2009 (UTC)

:::::: True, the second part of heap sort still scales at a rate of O(n log n), but it should be considerably faster than a full heapsort. I may benchmark the difference later, but I'm extremely doubtful that Arrays.sort() could be faster for data that is ''already'' in heap ordering. Plus Nat, one '''very''' important consideration that you leave out is that in order to use Arrays.sort(), unless you're sorting a simple array of numbers, your input values need to be encapsulated in objects which implement Comparable, and comparisons via Comparable are well over 5x slower than direct numerical comparison due to function call overhead. Avoiding overhead such as that is part of why my tree is just so damn fast ;) --[[User:Rednaxela|Rednaxela]] 15:31, 1 September 2009 (UTC)

:::::: Oh I see. I completely forgot about function call overhead. Thanks. » <span style="font-size:0.9em;color:darkgreen;">[[User:Nat|Nat]] | [[User_talk:Nat|Talk]]</span> » 15:59, 1 September 2009 (UTC)

I'm not sure where in this page I should post this since there are so many topics here, but anyway: while trying to improve Gibbs I found two situations in which your tree throws exceptions. The first is when searching for 0 neighbors. This one I easily solved by adding a simple <code>if(count < 1) return new ArrayList<Entry<T>>(0);</code> at the beginning of the nearestNeighbor method. The second problem is when you limit the tree size to something smaller than bucketSize. When the method removeOld() is called it throws a NullPointerException when checking the condition of the following loop
<syntaxhighlight lang="java">
do {
    cursor.locationCount--;
    cursor = cursor.parent;
} while (cursor.parent != null);
</syntaxhighlight>
which is easily solved by changing it to
<syntaxhighlight lang="java">
while (cursor != null) {
    cursor.locationCount--;
    cursor = cursor.parent;
}
</syntaxhighlight>
I know these problems are quite specific and easy to solve, but I just felt like I should report them. Also, if it is not much of a trouble, could you please add an option to remove the closest scan instead of the oldest? --[[User:Navajo|Navajo]] 01:15, 13 April 2010 (UTC)

Well, the version currently posted on this page I kind of consider deprecated in favor of my [http://bitbucket.org/rednaxela/knn-benchmark/src/tip/ags/utils/dataStructures/trees/thirdGenKD/ new version]. The new version is slightly faster, is more flexibly coded, happens to behave right for the zero-sized search, and adds an additional feature of providing an iterator to allow incremental search (though it's still best to specify an explicit max search size, for performance reasons). On the other hand, the new tree doesn't yet have this support for removing old/close data. I could add that to it shortly if there is demand though. Would you want to use that, Navajo? :) --[[User:Rednaxela|Rednaxela]] 01:59, 13 April 2010 (UTC)

I'm planning to test the impact of removing the closest instead of the oldest scan, but I can add this feature to the tree myself if it is too much trouble for you. --[[User:Navajo|Navajo]] 03:12, 13 April 2010 (UTC)

== Warning! Rounding Errors! ==

Argh! It seems rounding errors are evil! Evil evil evil! I was toying with my tree in a very lightly segmented gun where exact-same-locations will occur frequently, and it did that stupid rampant/infinite branching again. The cause? It turns out to be rounding, indicated by this excerpt from some debugging messages.
<pre>
...
Split Dimension: 1, Split Value: 1.0, Range: 0.9999999999999999 to 1.0, Width: -1.1102230246251565E-16
Split Dimension: 1, Split Value: 1.0, Range: 0.9999999999999999 to 1.0, Width: -1.1102230246251565E-16
Split Dimension: 1, Split Value: 1.0, Range: 0.9999999999999999 to 1.0, Width: -1.1102230246251565E-16
Split Dimension: 1, Split Value: 1.0, Range: 0.9999999999999999 to 1.0, Width: -1.1102230246251565E-16
Split Dimension: 1, Split Value: 1.0, Range: 0.9999999999999999 to 1.0, Width: -1.1102230246251565E-16
Split Dimension: 1, Split Value: 1.0, Range: 0.9999999999999999 to 1.0, Width: -1.1102230246251565E-16
...
</pre>
When the only values in the dimension were 1.0, and just ''BARELY'' below it, the splitValue would be set to the average of those two values. Due to rounding errors, the average happened to be the same as the higher value... causing the tree to lump all values into the left child node and try branching again, and again, and again :(
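
The failure mode in miniature: with doubles, the average of two adjacent values can round up to the larger one.
<syntaxhighlight lang="java">
public class RoundingDemo {
    public static void main(String[] args) {
        double lo = 0.9999999999999999; // the largest double strictly below 1.0
        double hi = 1.0;
        double split = (lo + hi) / 2;   // intended midpoint
        // Prints true: the sum rounds to 2.0, so the "midpoint" equals the
        // maximum and nothing can ever land strictly to its right.
        System.out.println(split == hi);
    }
}
</syntaxhighlight>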

So as a warning to those currently using the tree: this obscure situation could cause the tree to lock up... I'll release a fix soon. --[[User:Rednaxela|Rednaxela]] 00:54, 30 August 2009 (UTC)

: [[User:Simonton|Simonton]]'s fix to this was to only allow his tree a certain recursive depth (he used 500), after which it would start throwing away values. This way it was also (sort of) possible to have a tree with 'rolling averages' by setting the max recursive depth very low. However, it's more of a bandaid than a fix for the root of the actual problem. --[[User:Skilgannon|Skilgannon]] 13:03, 30 August 2009 (UTC)

:: No, he didn't use 500. ''We'' use 500 ourselves. » <span style="font-size:0.9em;color:darkgreen;">[[User:Nat|Nat]] | [[User_talk:Nat|Talk]]</span> » 13:29, 30 August 2009 (UTC)

:: For this particular issue, that would indeed be a very very poor bandaid, and anyways I have a proper fix in place now. As for having a limit on tree size, I very strongly prefer the limit on the number of entries I have implemented. While it won't put a hard limit on tree depth or node count, it will generally keep them within reasonable bounds and will always toss out old data in the exact order it was entered. --[[User:Rednaxela|Rednaxela]] 16:07, 30 August 2009 (UTC)

Well, this sucks. I thought I could fix it simply by making a > into a >=, because I thought it always would round one way... turns out it doesn't... it rounds whatever way it's in the mood for, it seems.. bah. Oh and by the way ABC, curse Shadow for hitting such almost-1.0 velocities as to cause rounding issues :P --[[User:Rednaxela|Rednaxela]] 02:22, 30 August 2009 (UTC)

I'm sure there are lots of ways to deal with this, but just a comment on how my insert method handles a similar situation. Values less than the split go to the left, values greater go to the right, and values equal to the split go to alternating sides. One other implication is that when removing a node, you have to check both sides when the value is equal to the split value, but that's easy enough. --[[User:Voidious|Voidious]] 02:33, 30 August 2009 (UTC)

: I guess if you had 19 0.9999's and one 1.0 and it used 1.0 as the split, you'd still have this problem unless the first alternating side also alternated. So I suppose a different special case would probably work a lot better. Another thing that sucks about this, at least if you use the tree the way I do... Rounding every number to, say, 10 digits seems pretty reasonable, but I wouldn't want to do that either, because I count on the fact that the neighbors returned from the tree are the same exact objects I inserted. :-/ --[[User:Voidious|Voidious]] 02:43, 30 August 2009 (UTC)

(edit conflict) I saw that alternating approach, but I really don't like it, for several reasons. It improves the balance of the tree very slightly, but at the cost of making each node wider than it needs to be. Not only do deletes have to search more nodes, Voidious, but searches do as well. For example, presume a dimension only has values of 0.0, 0.5, and 1.0 and the first split in the tree is at 0.5. Then the full search has to be repeated completely on both sides, and thus could have a huge impact if such a split happens to be the first split in the whole tree. ''Anyways'', I now have a fix. The solution: if the split value is equal to the maximum node value (which will only occur when they're so close that no double value between them could exist), then it sets the split value to the minimum node value instead, which guarantees some values will be put in each node. Extensive testing is showing this is working quite nicely :) --[[User:Rednaxela|Rednaxela]] 02:48, 30 August 2009 (UTC)
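
A sketch of that guard, with hypothetical names (the actual tree folds this into its splitting code):
<syntaxhighlight lang="java">
// If the midpoint rounds up onto the maximum (the two values are adjacent
// doubles), fall back to the minimum so that entries equal to the maximum
// still land in the other child.
static double chooseSplit(double minValue, double maxValue) {
    double split = (minValue + maxValue) / 2;
    if (split == maxValue) {
        split = minValue;
    }
    return split;
}
</syntaxhighlight>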

: Ah, cool. I agree it can unbalance the tree, but I don't think it makes searches any slower (otherwise). Mine definitely doesn't account for the equal-to-split edge case in its "findLeaf" - I recall thinking it would need to account for that, then realizing it didn't.
: When coming back up the tree, you test whether the other side of the tree could have a node with a distance to the search point that is lower than some threshold. Since the value could be infinitely close to the split value, you have to just assume it is the split value, right? So while you may take a wrong turn while descending the tree if you hit values equal to the split value, you'll check the other side on the way back up, which you're just as likely to have had to do anyway. In your example, you'd need to search the other side anyway - know what I mean? It's been a while since I wrote my tree, maybe I'm misunderstanding something... =) But I'm pretty sure it's just as accurate and just as fast for searches.
: Deletes need to find an actual value that's already in the tree, though, which is a different scenario. (And I don't particularly care how fast deletes are.) --[[User:Voidious|Voidious]] 03:11, 30 August 2009 (UTC)
:: Which reminds me, I actually wrote this case for another scenario, which might be a problem in your current solution, too: if all the nodes have the same value for the split dimension. --[[User:Voidious|Voidious]] 03:19, 30 August 2009 (UTC)
::: I presume you mean all entries, not all nodes. In that case yes, I ''already'' deal with that case. I always set the split dimension to the widest dimension, and whenever "widest" is still 0 width, it will simply double the bucket size for that particular node. My logic is that the scenario you describe could only happen with "split on widest dimension" when all entries are in ''exactly'' the same spot, in which case there could never be any performance gain from putting them in separate nodes. On a related note, one possible optimization I could also do with this would be giving such nodes a special marking and avoiding redundant distance calculations in them. I haven't bothered with that yet though, since I doubt it would occur often in practice with a well-segmented gun. --[[User:Rednaxela|Rednaxela]] 04:51, 30 August 2009 (UTC)
:: Actually Voidious, in my tree checking the other side anyway wouldn't happen. I keep track of an extra-tight bounding box for each node and compare to that, and since my splitting doesn't alternate, the tight bounding box will never overlap the split value. So... you're right about some trees, but not mine :) --[[User:Rednaxela|Rednaxela]] 04:51, 30 August 2009 (UTC)

== Some major changes ==

Man, this tree keeps taking away from the time to finish my melee gun, but anyways: I've done a major update now which gives the tree a somewhat nicer and more flexible API. In particular, you now construct it with things such as <code>new KdTree.SqrElucid<T>(dimensions, sizeLimit)</code> or <code>new KdTree.WeightedManhattan<T>(dimensions, sizeLimit)</code> instead of <code>new KdTree<T>(dimensions, sizeLimit)</code> and modifying the distance function in your copy of the code (I'm looking at you, [[User:Skilgannon|Skilgannon]] :)). This also makes it easy to use the unweighted version, which is faster by 5-10%. One interesting note I found: while Manhattan distance is less complicated to calculate, it makes it harder for the tree to eliminate branches and is thus twice as slow (looking at you again, [[User:Skilgannon|Skilgannon]], considering your recent concerns with DrussGT's speed). What do people think? I'm also considering making a stripped down 'lite' version with no weighting, only sqrElucid distance, some of the more aggressive and less significant optimizations removed, and no size limiting, which would be nearly as fast. --[[User:Rednaxela|Rednaxela]] 05:52, 2 September 2009 (UTC)
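
For anyone picking this up, usage looks roughly like the sketch below. The addPoint/nearestNeighbor names follow the posted code, but treat the details as illustrative rather than authoritative:
<syntaxhighlight lang="java">
// Hypothetical usage; constructor args are (dimensions, sizeLimit), with a
// null sizeLimit meaning "no limit" per the discussion below.
KdTree<String> tree = new KdTree.SqrElucid<String>(2, null);
tree.addPoint(new double[] {0.1, 0.9}, "scan A");
tree.addPoint(new double[] {0.4, 0.2}, "scan B");

// Fetch the single nearest entry; the last argument requests (or not)
// sequentially sorted output, as discussed in the sections below.
for (KdTree.Entry<String> entry : tree.nearestNeighbor(new double[] {0.0, 1.0}, 1, false)) {
    System.out.println(entry.value + " at squared distance " + entry.distance);
}
</syntaxhighlight>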

Hmm, so using Manhattan approximately doubles the number of branches that must be recursed? Is this dimension-dependent, i.e., would the fact that I'm using 11 dimensions make this a factor of more than 2? I'd gladly switch to using Euclidean distance to get this speed increase, but from what I've found, my gun scores significantly better using Manhattan distance. The difference between the two is about 0.5 worth of [[TCRM]] score. I'm guessing that how my dimensions are defined somehow favors the use of Manhattan over Euclidean, but I don't currently have the time to tweak my segments into a form that works well for Euclidean instead =) Another point: for bots that only use one weighting scheme throughout the match, isn't it quicker to multiply the point by the weighting before storing it, and then afterwards treat the dimensions as unweighted? This is what I've been doing in DrussGT since way back when... it also means that I can apply non-linear weightings without any performance hit at runtime. --[[User:Skilgannon|Skilgannon]] 08:50, 2 September 2009 (UTC)

: I didn't run an exact count of the number of branches that must be recursed, but since the time taken is doubled, I feel pretty sure the number of branches recursed is at least doubled. It being dimension-dependent might indeed be the case, but I don't think 11 dimensions would make it more than my test showed, since my test was with the 13-dimension data recorded from Diamond. About weighting: yes, of course it's quicker to multiply the point by the weighting before storing it. The weighted versions of the tree are only intended for bots which do dynamic weighting, like Diamond does (melee weighting vs 1v1 weighting). Static weighting is considered to be something that bots are responsible for doing when they create their dimensions in the first place. --[[User:Rednaxela|Rednaxela]] 12:42, 2 September 2009 (UTC)
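
A sketch of that pre-scaling trick (a hypothetical helper; whether to scale by the weight or its square root depends on how the distance function applies weights):
<syntaxhighlight lang="java">
// Scale each coordinate once at insertion time, then treat the tree as
// unweighted. For weighted Manhattan this is w[i] directly; for a distance
// that weights the squared differences it would be Math.sqrt(w[i]).
static double[] preWeight(double[] point, double[] weights) {
    double[] scaled = new double[point.length];
    for (int i = 0; i < point.length; i++) {
        scaled[i] = point[i] * weights[i];
    }
    return scaled;
}
</syntaxhighlight>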

I haven't tested it yet, but I would like to say: fantastic job! This definitely gives "everyone what he/she wants". I only wonder, don't abstract classes make the tree that much slower (because of the overhead of having to decide which distance() function to use, for example)? I'm not a Java expert, so it might be that that is optimized at runtime. --[[User:Positive|Positive]] 14:29, 2 September 2009 (UTC)

: Thanks! I was unsure about that myself at first, but I tried it, gave it a good performance test, and it performed exactly the same it seemed, at least in Java 6 (I don't know for certain, but I bet old enough VMs wouldn't perform equally with them). In fact, the slight tweak of having the distance functions read 'weights' from the class instead of passing it in as an argument seemed to give this revision a very slight improvement in performance. --[[User:Rednaxela|Rednaxela]] 14:44, 2 September 2009 (UTC)

:: Great, you've really done your homework. :) I just looked over the code, and one little thingie: it seems to me you could use a regular "int" for sizeLimit instead of Integer. (You can check for sizeLimit==0 instead of sizeLimit==null.) Might give a small improvement. :P --[[User:Positive|Positive]] 14:58, 2 September 2009 (UTC)

::: Hmm, true, I could do that. That wouldn't have much of a performance impact though, I'd think, considering that <code>== 0</code> and <code>== null</code> would be the same speed and the numerical comparison when actually checking the value only happens once per insertion call. I might make the change anyway though, considering I'm not really fond of wrapper classes like Integer to begin with. --[[User:Rednaxela|Rednaxela]] 15:09, 2 September 2009 (UTC)

I know I shouldn't burden you with any more feature requests, but I'd really like the following extension to your tree (because it is a lot less elegant to do outside of the class): as an addition to ''nearestNeighbor'', I'd like to give an argument like ''(double)acceptedLowest[], (double)acceptedHighest[]'' that prevents the tree from returning any matches with a dimension value outside of the specified range. (That feature would be awesome in a new sologun I'm trying to make for Portia!) --[[User:Positive|Positive]] 19:45, 2 September 2009 (UTC)

: Hmm... well... that would be easy enough to add, except for one issue: the proper/efficient way to implement it would require a '''third''' distance function: the longest distance from a point to the inside of a region. I think the subclasses that implement the distance calculations are rather bloated/repetitive as it is, without having to add that as well. So yeah, feel free to modify it for your usage, but adding that particular feature to the main version just adds too much bulk, I think. --[[User:Rednaxela|Rednaxela]] 00:24, 3 September 2009 (UTC)

:: Well... I gave it a good try but couldn't work the bugs out of it, nor were preliminary performance indications looking promising. --[[User:Rednaxela|Rednaxela]] 15:37, 5 September 2009 (UTC)

== Spelling ==

Isn't 'Elucidian' supposed to be 'Euclidean'? » <span style="font-size:0.9em;color:darkgreen;">[[User:Nat|Nat]] | [[User_talk:Nat|Talk]]</span> » 14:46, 5 September 2009 (UTC)

: Yep, you're right :) --[[User:Rednaxela|Rednaxela]] 15:37, 5 September 2009 (UTC)

== Help with the Java ==

Hi Rednaxela, I'm interested in learning your kdTree and playing with the possible applications. I was wondering if you or someone might point me in the direction of an open source bot that uses your tree... Robocode is my only experience programming, and a proper example of it implemented would greatly help and make my learning curve a lot more fun. Thx --[[User:Jlm0924|Jlm0924]] 20:04, 17 September 2009 (UTC)

[[User:Skilgannon|Skilgannon]] uses it in [[DrussGT]], which is open source. --[[User:Voidious|Voidious]] 22:06, 17 September 2009 (UTC)

: Oh, and [[RougeDC]] (willow) does too, of course. =) --[[User:Voidious|Voidious]] 22:26, 17 September 2009 (UTC)

: I wouldn't recommend RougeDC willow as a reference for usage of this kD-Tree, as it still uses an old (and subtly buggy) version. [[DrussGT]] would be a good example indeed. When I get around to releasing [[Glacier]], that will also be an example. --[[User:Rednaxela|Rednaxela]] 23:02, 17 September 2009 (UTC)

: Yeah, take a look at [[DrussGT]]. I use it in two different places, both the gun and the movement. In the gun it is used to find similar situations for finding an angle to shoot at. In the movement it is used to guess the bullet power the enemy will use to fire. Once you've extracted the source, look at jk/mega/dgun/DrussGunDC.java and search for 'heapTree'. It should be fairly obvious where I create the tree, where I'm adding the data to the tree, and where I retrieve the data from the tree based on how close it is to the current scan. Perhaps it would even be good to write a simple gun showing how the various trees are used, as a reference. --[[User:Skilgannon|Skilgannon]] 09:08, 18 September 2009 (UTC)

: I think having a simple reference implementation is a really good idea. Though if by "various trees" you mean including those besides Rednaxela's, right now I'm not sure why anyone would choose anything else. =) I'm already debating optimizing mine or just switching to his. --[[User:Voidious|Voidious]] 13:02, 18 September 2009 (UTC)

: I'm taking a look now... a couple of reference implementations in a simple bot would be a gold mine though! DrussGT is overwhelming to me :) I'll get some sleep and take a better look :) Thx guys --[[User:Jlm0924|Jlm0924]] 16:00, 18 September 2009 (UTC)

== sequentialSorting ==

Hey Rednaxela, still an awesome tree, but perhaps you could add a little note to the tree about how the sequentialSorting works... I only just found out it sorts ''highest'' distance first. :P --[[User:Positive|Positive]] 10:36, 24 September 2009 (UTC)

Yeah, it might be good to make a note of that somewhere. It was done that way to both 1) be consistent with how ABC (falsely) thought Simonton's one was working, and 2) because that's the order that values can naturally be extracted from the heap.

: Okay, got it, no problem. I've done a little alteration to your code to turn the order around; all fixed. :) --[[User:Positive|Positive]] 16:34, 24 September 2009 (UTC)

== Range Search ==

Can I ask that you add this as well? It's part of some standard kd-tree implementations in other languages. You can probably read over how it works in my tree, and adding support for it shouldn't cause problems. It can be used in, say, older style pattern matchers for getting matches to the current enemy state within hyper-rectangular bounds (in this case velocity and heading delta, say). --[[User:Chase-san|Chase]] 02:49, 3 March 2010 (UTC)

Sure, it wouldn't be hard to add in my rewrite. It might be a while till I get around to that though, since I'm really anxious to try my (possibly completely novel?) ideas for making it faster and such. --[[User:Rednaxela|Rednaxela]] 03:27, 3 March 2010 (UTC)
: Actually, I had tested some of those on my tree the other day while trying to optimize it. The "choosing a better dimension if the current one has only one value" actually produced decent gains, assuming you chose a good dimension. In my first attempt I chose the one with the biggest difference, but I didn't like it, even though it worked better than my "iterate through dimensions in order till you find one that has more than one value"; my guess is the more values a dimension has, the better (but that is hard to track). None of the data I posted is from those, but it got around 0.3 to the listed maps' 0.4. --[[User:Chase-san|Chase]] 03:55, 3 March 2010 (UTC)
:: Are you talking about the dimension to split on? That's very different from the things I have in my plans section. As far as what dimension to split on and where to split in it, I already do "middle of the dimension with the widest variance" (which just today I noticed some research papers conjecturing is perhaps the optimal kd-tree splitting method). --[[User:Rednaxela|Rednaxela]] 04:06, 3 March 2010 (UTC)
::: Oh, alright, that's what you meant when you say that. Hehe, I guess I am more than a bit out of date. --[[User:Chase-san|Chase]] 04:40, 3 March 2010 (UTC)
:::: Yeah, what I meant with the "dimension-pruned" things is that the calculation of the distance between the search point and each point in a bucket can be made to not repeat the part of the summation for dimensions where the node has a width of 0, instead calculating that part of the summation once for every point in the bucket. --[[User:Rednaxela|Rednaxela]] 05:12, 3 March 2010 (UTC)

== Rewrite progress ==

Well, I have my rewrite largely done now. So, to those who thought it would be neat to have a search iterator... I now have it! At no significant performance penalty, in fact, though it does operate best when you give it a 'max iteration length'; if you stop before that, you still gain some time because it never descends into nodes until they could possibly be required for next(). The reason it's still good to provide a 'max iteration length' is so that it can routinely prune its list of evaluated points. In order to make this possible efficiently, I coded up an "interval heap" double-ended queue that is very fast. Results look encouraging so far, though I didn't see the gain I hoped to see from its more flexible path selection code. See the following results:
<pre>RESULT << k-nearest neighbours search with Rednaxela's Bucket kd-tree (Unsorted Output) >>
: Average searching time = 0.059 miliseconds
: Average worst searching time = 0.899 miliseconds
: Average adding time = 7.15 microseconds
: Accuracy = 100%

RESULT << k-nearest neighbours search with Red's "Next Gen" kd-tree (Sorted Output) >>
: Average searching time = 0.061 miliseconds
: Average worst searching time = 0.936 miliseconds
: Average adding time = 7.47 microseconds
: Accuracy = 100%

RESULT << k-nearest neighbours search with Rednaxela's Bucket kd-tree (Sorted Output) >>
: Average searching time = 0.061 miliseconds
: Average worst searching time = 1.086 miliseconds
: Average adding time = 7.15 microseconds
: Accuracy = 100%</pre>
It's a pretty close race despite the code structure and details of the search algorithm being rather different. Sets a new record for fastest tree with sorted output anyway, and I should be able to improve the results further... --[[User:Rednaxela|Rednaxela]] 07:25, 4 March 2010 (UTC)
: I take it the output from the iterator is sorted, so I assume you may have to search additional buckets to find the next nearest without completely exhausting the current one. How can you do that without much of a performance hit? Do you just make the list internally and supply the values as needed, then grab additional batches as needed? --[[User:Chase-san|Chase]] 10:15, 4 March 2010 (UTC)
:: Pretty much. Its procedure is mostly as follows:
::# Could any nodes in the pendingPaths heap have points closer to the search point than the closest point in the evaluatedPoints heap? If yes, do the following, looping so long as this condition is true.
::## Pop the smallest-distance node out of pendingPaths, and descend it down the "first guess" path according to which splits lead closer to the search point.
::##* During the descent, put the branches not taken into the pendingPaths heap, with the distance between that path's bounding box and the search point computed.
::## At the bottom of the descent, iterate through all points, computing their distance to the search point.
::##* Insert the point into the evaluatedPoints heap if either 1) the size of evaluatedPoints is less than the max number of points remaining to return, or 2) the distance is smaller than the largest distance in evaluatedPoints.
::##* After each insertion, if the size of evaluatedPoints is greater than the max number of points remaining to return, remove the largest point from evaluatedPoints.
::# Pop the smallest-distance result out of evaluatedPoints and return it.
:: Interestingly, even if I pass the iterator a 'max points to return' that is essentially unlimited, as would allow iterating the whole tree, it's merely 30% slower to get the nearest 40 points, still faster than any tree other than Duyn's and my own. --[[User:Rednaxela|Rednaxela]] 15:06, 4 March 2010 (UTC)

Adding "replaceMax" and "replaceMin" methods to the interval heap helped performance a bit:
<pre>RESULT << k-nearest neighbours search with Rednaxela's Bucket kd-tree (Unsorted) >>
: Average searching time = 0.061 miliseconds
: Average worst searching time = 1.281 miliseconds
: Average adding time = 7.23 microseconds
: Accuracy = 100%

RESULT << k-nearest neighbours search with Rednaxela's Bucket kd-tree (Sorted) >>
: Average searching time = 0.062 miliseconds
: Average worst searching time = 1.372 miliseconds
: Average adding time = 7.23 microseconds
: Accuracy = 100%

RESULT << k-nearest neighbours search with Red's "Next Gen" kd-tree (Storted) >>
: Average searching time = 0.059 miliseconds
: Average worst searching time = 1.119 miliseconds
: Average adding time = 7.7 microseconds
: Accuracy = 100%


BEST RESULT:
 - #1 Red's "Next Gen" kd-tree (Sorted) [0.0591]
 - #2 Rednaxela's Bucket kd-tree (Unsorted) [0.061]
 - #3 Rednaxela's Bucket kd-tree (Sorted) [0.0625]</pre>
--[[User:Rednaxela|Rednaxela]] 20:06, 4 March 2010 (UTC)
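
The idea behind replaceMin/replaceMax, sketched here on a plain binary min-heap (the interval heap applies the same trick at both ends): overwriting the root and sifting down once is cheaper than a separate remove followed by an insert.
<syntaxhighlight lang="java">
// Replace the minimum of a min-heap stored in heap[0..size-1] with a new
// value, restoring the heap property with a single sift-down pass.
static void replaceMin(double[] heap, int size, double newValue) {
    heap[0] = newValue;
    int i = 0;
    while (2 * i + 1 < size) {
        int child = 2 * i + 1;
        if (child + 1 < size && heap[child + 1] < heap[child]) {
            child++;
        }
        if (heap[i] <= heap[child]) {
            break;
        }
        double tmp = heap[i];
        heap[i] = heap[child];
        heap[child] = tmp;
        i = child;
    }
}
</syntaxhighlight>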

Interestingly, the performance improvement is more obvious on my netbook:
<pre>RESULT << k-nearest neighbours search with Rednaxela's Bucket kd-tree (unsorted) >>
: Average searching time = 0.301 miliseconds
: Average worst searching time = 19.547 miliseconds
: Average adding time = 14.88 microseconds
: Accuracy = 100%

RESULT << k-nearest neighbours search with Rednaxela's Bucket kd-tree (sorted) >>
: Average searching time = 0.313 miliseconds
: Average worst searching time = 19.032 miliseconds
: Average adding time = 15.06 microseconds
: Accuracy = 100%

RESULT << k-nearest neighbours search with Red's "Next Gen" kd-tree (sorted) >>
: Average searching time = 0.283 miliseconds
: Average worst searching time = 18.02 miliseconds
: Average adding time = 15.24 microseconds
: Accuracy = 100%


BEST RESULT:
 - #1 Red's "Next Gen" kd-tree (sorted) [0.2832]
 - #2 Rednaxela's Bucket kd-tree (unsorted) [0.3009]
 - #3 Rednaxela's Bucket kd-tree (sorted) [0.3134]</pre>
Note, both this and the above tests are done with 100 iterations, which seems to give a fair bit of accuracy. Also... it looks like the worst search times increase on my netbook by a much greater factor than the average does. I wonder what the cause of that is... --[[User:Rednaxela|Rednaxela]] 20:40, 4 March 2010 (UTC)

Ouch! I just tested on a school computer, and the ratio of the average-worst-search-time to the average-search-time got over 4x worse, 19.4ms:0.05ms! I have no clue why... the only theory I have is that the computers with the slow worst times are on OpenJDK instead of the normal JDK6. Whether it's the fault of javac or the JVM is unknown. I doubt it's related to CPU cache size, because the netbook (Intel Atom N270) has a 512kb cache while the school computer (Intel Core2 6300) has a 2048kb cache yet shows a larger discrepancy. --[[User:Rednaxela|Rednaxela]] 01:37, 5 March 2010 (UTC)

== R-tree variants? ==

Has anyone taken a look at the R-tree variants like the R* tree? I'm currently thinking of trying to adapt my 3rd gen tree (currently unreleased) into an R-tree variant. That means that overlap between nodes will be allowed, which will give smaller bounding boxes and also make it easy to make it a 3-ary or 4-ary tree instead of a 2-ary tree. My current 3rd gen implementation happens to be coded in a way that would make conversion fairly easy, I think... And my ideas for pending improvements (implicit subtree instead of bucket, and dimension pruning) should also be equally applicable to the overlap-allowing R-tree variants. The tighter bounding boxes, at the cost of overlap, and the wider branching may well be worth it... --[[User:Rednaxela|Rednaxela]] 19:28, 5 March 2010 (UTC)

Well, I tried a quick hack to use an R-tree-like insertion rule (instead of inserting points on the 'correct' side of the split, it inserts points where they'll cause the volume of the child node to increase the least). This leads to nodes with less volume, but possible overlap (violating normal kd-tree rules). The performance change was negligible in either direction. Now I just need to replace kd-tree-style node splitting with B-tree-style self-balancing mechanics, which should give a good result, I hope. After that, I plan to try some of the 'forced reinsertion' voodoo of the R*-tree. --[[User:Rednaxela|Rednaxela]] 18:21, 6 March 2010 (UTC)

Well, I got an R-tree working, but unfortunately its performance was quite poor... I'm unsure if the issue is a bug or if that's just how it has to be. I tried adding 'forced reinsertion' of tree nodes in R*-tree style, which seemed to help some, but performance, while improved, was still poor. I might try forced reinsertion of data points as well, but I have doubts it would improve results enough to compete with my kd-bucket-tree. I'm also thinking of changing the node-selection algorithm from the standard "whatever increases volume the least", like R-trees normally do, into something that discourages overlap of nodes more, which may help. --[[User:Rednaxela|Rednaxela]] 17:16, 10 March 2010 (UTC)

Haven't had much of a chance to work on this since the last post, but I did find that the current R-tree variant I have is hitting 25% or so of nodes, far too many. Going to indeed first try the forced reinsertion of points to see if that helps. After that, I'll try something that discourages overlap more, which... oh... just happens to be what the [[wikipedia:X-tree|X-tree]] does. It's encouraging to find that the types of approaches I think about going down have already been found to be promising by others. --[[User:Rednaxela|Rednaxela]] 18:44, 29 March 2010 (UTC)

Tried the reinsertion of leaf points as well; it didn't help much. I'm now suspecting I have a bug in my implementation. The code is also getting a little messy. As such, I'm going to re-do the bulk of the R-tree code without basing it so much on the kD-Tree code this time. I need to both catch bugs and make the code cleaner before I feel comfortable trying out other changes like what an "X-Tree" implements. Progress is slow due to class getting busy. --[[User:Rednaxela|Rednaxela]] 02:01, 7 April 2010 (UTC)

== Compressing unused dimensions ==

So for an experiment, listed earlier as 'dimension-pruned distance calculations', I made a variant of my tree that compresses point data in a simple way that I expected to improve performance: in leaf nodes, the 'points' array omits axes whose values are the same for all points. This means some memory savings, plus it means that when adding all of those points, it calculates a 'base' distance from the values in the 'unused' dimensions, and then calculates the additional distance for each individual point only over the dimensions necessary. It seemed that this could in theory gain some performance by skipping a couple of dimensions in each distance calculation for an individual point. Unfortunately it seems that the cost of making, for each node, a new 'SearchPoint' array with dimensions matching the active ones in the leaf node outweighs the distance calculation time saved. I may be able to optimize this overhead out somewhat, but I'm not so optimistic about this path anymore. The possible gains are sure to be small with normal data sets, so I'm not sure it's worth trying to squeeze that drop of performance out of it. Onto other approaches for now... --[[User:Rednaxela|Rednaxela]] 17:38, 10 March 2010 (UTC)

== Tree differences ==

I switched to your tree in my KNN classifier. Execution time for 1175 battles went from 855s to 582s, pretty sweet. (HOT takes 251, to give a baseline.) I was concerned at first that it hit 19 fewer shots out of 1,282,681 - not to say that isn't negligible, but out of concern that the results should be identical. But with this rounded off data, my guess is that it's probably just choosing some different points when two are identical, so I'm not gonna waste time investigating it. Anyway, good stuff. =) --[[User:Voidious|Voidious]] 23:25, 15 March 2010 (UTC)

: Glad to see it's working well for you. Yeah, I wouldn't be surprised if that's the cause of the difference; one note is that, if I remember correctly, the tree will (currently) prefer the older data when there are a bunch of duplicates. Just out of curiosity, which version of the tree are you using? My [http://bitbucket.org/rednaxela/knn-benchmark/src/tip/ags/utils/dataStructures/trees/thirdGenKD/ rewrite] is slightly faster than the one currently posted on this page. --[[User:Rednaxela|Rednaxela]] 06:58, 18 March 2010 (UTC)

: I'm using the old one (from this page). I didn't realize the new one was available - I have some plans for that iterator. =) Thanks. --[[User:Voidious|Voidious]] 13:59, 18 March 2010 (UTC)

: Hey btw, in the new tree, Eclipse yells at me that your two @Override's in SquareEuclideanDistanceFunction and the 4 in NearestNeighborIterator are errors, since those methods don't override a superclass. --[[User:Voidious|Voidious]] 15:51, 20 March 2010 (UTC)

:: In Java 6, @Override applies to interfaces as well. I've noticed Eclipse has bugs with this, however, and doesn't support it even when using Java 6. Anything except Eclipse likes it. I might change it, I suppose, though... --[[User:Rednaxela|Rednaxela]] 16:12, 20 March 2010 (UTC)

== 3rd gen tree licence ==

Is your 3rd gen kd-tree found in the mercurial repo licensed under the same zlib license? I found no license notice in your third gen kd-tree source code. --[[User:Nat|<span style="color:#099;">Nat</span>]] [[User talk:Nat|<span style="color:#0a5;">Pavasant</span>]] 13:38, 17 April 2010 (UTC)

Hmm yeah, I forgot to deal with that. Sure, that same zlib license works. --[[User:Rednaxela|Rednaxela]] 14:56, 17 April 2010 (UTC)

== Translating... reimplementing... etc ==

In June/July I'm going to be starting my thesis, a Simultaneous Localisation And Mapping algorithm, and in order to run at a half decent speed I'm probably going to need some form of 3D tree to project and compare different robot poses with the current scan readings of the environment. Anyway, being in an academic environment, Matlab (embedded in Labview) is the preferred language for me to implement my algorithm in. I'm not sure if you've worked with it, but it has very fast array operations despite the actual code itself being quite slow. I was wondering what changes you would make to the general design of your kd-tree if array operations (eg. <code>y = sin(x)</code> where y and x are arrays) yielded a speedup of 50x or so versus accessing each component individually in a sort of <code>for x_i in x: y_i = sin(x_i)</code> situation. Any major re-factoring? I'm going to need some sort of KNN implementation along with a cutoff threshold. Any thoughts or insights would be appreciated before I commit myself too deeply to one implementation =) Thanks --[[User:Skilgannon|Skilgannon]] 18:08, 15 April 2011 (UTC)

Hmm... interesting question... I've worked with Matlab some, but have not yet had reason to use it very much, so it's not a language I'm particularly fluent in yet. As far as optimizing for those criteria...
# The simplest place to start would be translating simple things that iterate over axes into array operations. It should be trivial to do so for the distance calculation and the selection of which axis to split on. Then other things like loops over the entries in buckets could be turned into array operations.
# Probably the *biggest* factor in all of this is bucket size. With array operations being tremendously superior to normal ones, it would certainly be the case that much larger bucket sizes would be more optimal for a Matlab implementation. Maybe 10x the bucket size? It would really take some playing around to find what works best.
# Does Matlab have native implementations of heaps? Size-limited heaps are rather good for the task of incrementally creating a list of the "top n values", which I take advantage of in my tree implementation in a few places. An implementation of a heap in Matlab code would not be worth it, however. If there is a native heap implementation, use it; otherwise it'll probably be faster to stick with other sorted-list strategies.
# In my code I have a good number of "fail fast" checks that I've experimentally found to improve performance. In Java/C/etc, they're well worth it because often one can quickly rule out a condition before doing a slower check such as one that requires a distance calculation. In Matlab many of these "fail fast" checks will probably cost more than a distance calculation, so they wouldn't be worth it.
# Pay close attention to general Matlab performance advice like [http://blogs.mathworks.com/loren/2008/06/25/speeding-up-matlab-applications/ here].
# Perhaps consider some other approaches like various variants of the [http://en.wikipedia.org/wiki/R-tree R-Tree]? I don't see the bucket kd-tree used too often in the literature, and yet I could never get my R-tree variant attempts to perform anywhere near as well as the bucket kd-tree approach. Nevertheless, I think such approaches definitely warrant further investigation.

Do keep in mind that the gains you'd see from a tree compared to the naive approach will be much smaller in a Matlab implementation than in a C or Java implementation, since the naive approach is simple and well suited to vectorization. It could be the case that a tree won't be worth it at all in Matlab, unless the size of the dataset is particularly huge.

If the performance of an optimized tree is really needed, I believe one can usually link C/C++ code to Matlab code. Could this be considered as an option? I've seen [http://info.ee.surrey.ac.uk/Personal/Z.Kalal/tld.html some things] in an academic context which are primarily written in Matlab but use C/C++ code in some performance-critical spots. It seems to me that a search tree really is the perfect sort of thing to put in C/C++ code, because its design doesn't affect the output of the algorithm, just the speed. In fact, when the tree is not the focus of a thesis, I might argue that keeping the search tree in C makes its application more clear, because someone looking at how the tree is applied may not necessarily be concerned with the nitty-gritty of the n-nearest-neighbors search beyond knowing "and here it does a fast n-nearest-neighbors search", since its output will be the same whether it's naive or fast. That's just my judgement though, I'm sure many might disagree with me, haha.

Does that help? :) --[[User:Rednaxela|Rednaxela]] 04:20, 16 April 2011 (UTC)

That's pretty much it exactly =) I'm estimating my initial 'warm up' scan will add around 1.4 million points, so a naive approach isn't really an option =). One other thing I was thinking about: if I had a line, rather than a point, how would I store that in a kd-tree? Put in a whole bunch of points to make up the locus? The reason being, I have several options for storing which areas have known walls and which are known open space. One of them is using lines which essentially trace out the lasers fired from my range finder, with one end marked as 'solid' and the rest marked as 'open space'. As such, when I test a new point cloud in a certain position to see how well it fits, if the line (rather than just the endpoint) is in the top N closest to a point in the cloud, it needs to be returned as well. Thanks for all your help =) --[[User:Skilgannon|Skilgannon]] 06:25, 18 April 2011 (UTC)

Well, one doesn't need to use a whole bunch of points to do lines. There are two main ways that come to mind to deal with non-point objects (lines, or anything else really):
# Consider a tree that explicitly tracks the bounding boxes of nodes, and uses them instead of the split value during the search (mine does this for performance reasons). Place the lines/objects in the tree based on their center point (it doesn't have to be their center necessarily, but that seems like as good a choice as any). Then expand the tracked bounding boxes based on the bounding box of the line/object. This means your nodes will have overlapping bounding boxes, but it'll still work. (See the sketch after this comment.)
# Or, alternatively, place an entry for the object/line in every node that "fits". If a line spans a split, insert a copy on each side. If you are tracking bounding boxes, crop them at the split so that they won't overlap. This approach leads to much duplication and more memory use, but may have better performance during the search due to the lack of overlapping nodes. On the other hand, this approach also requires extra handling to make sure you don't get duplicates in your search results.
Actually, those two approaches are inspired by what I've seen of [[wikipedia:R-Tree|R-Tree]] variants. It's common to use them for storing non-point objects for spatial search. R-Trees inherently have overlap, and the [[wikipedia:R+ Tree|R+ tree]] essentially takes the duplication approach I mentioned to deal with it, whereas other variants of the R-tree take the approach of putting objects just into the one node that "fits best". As a note of comparison: the kd-tree is to the plain binary search tree as the R-tree is to the B-tree. If your lines start introducing overlap in your nodes particularly often, I'd really take a good look into R-tree variants, because the literature for them is written with overlap in mind (since R-trees inherently tend to have nodes with overlap anyway). --[[User:Rednaxela|Rednaxela]] 12:24, 18 April 2011 (UTC)
Latest revision as of 13:34, 18 April 2011
So this means I can use it in a closed-source robot? » Nat | Talk » 13:24, 25 August 2009 (UTC)
- Yep, so long as you don't misrepresent its origin (i.e. claim you wrote the kd-tree in use) :) --Rednaxela 13:29, 25 August 2009 (UTC)
- You used to say that it would be licensed under the RWPCL and CC-BY, but I'm happier with this, since it means I can release the code in a trickier way than the usual in-jar method =) » Nat | Talk » 14:25, 25 August 2009 (UTC)
- I decided I wanted to avoid as much complication as possible, and I found the zlib licence. Since I don't mind how people use it so long as they don't make false claims to have written it, and it doesn't conflict with the RWPCL (i.e. can be used in RWPCL bots), it seemed like a good match. I removed one clause that I deemed unnecessary. It seems reasonable to me :) --Rednaxela 14:46, 25 August 2009 (UTC)
I just tried this tree, and I'm getting problems where every now and again an Entry will have a null value in it. I've got workaround code for it, but I know it shouldn't be doing that, and I'm not sure why it is. Also, DrussGT is still skipping a LOT of turns with this tree, so I'll have to find a dimension or 3 to cull... I could even run 2 trees with 8 dimensions in less time than my 1 tree with 11. Oh yes, I changed the 2 distance functions to be Manhattan distance, but that shouldn't affect things too much. --Skilgannon 18:21, 26 August 2009 (UTC)
- Huh... that seems odd. I don't see why null entries could ever happen; I'll look into it when I get home. As far as DrussGT skipping a lot of turns still, I assume you don't mean any more turns than normal, do you? And yeah, even a few dimensions will make a huge difference with any kd-tree. --Rednaxela 18:33, 26 August 2009 (UTC)
- Yeah, I know it passed all the benchmarks etc. so I'm not sure what's going on. Maybe look in DrussGT 1.3.10wilo for the version I'm using if you want to test it. Lines 614 to 618 of DrussGunDC.java are where my workaround is (basically just removing all Entries with null values from the cluster). If I take that out it starts throwing errors every few rounds. (Note: just to prevent any confusion, due to me adding the first workaround - just skipping null values in the loop - it now throws the errors down on line 663 when it sorts the Indices instead of in the loop when trying to access the data inside the null value). Thanks for any light you can shed on this =) --Skilgannon 18:55, 26 August 2009 (UTC)
- Aha! I haven't tested it yet, but this should fix the nulls. :) --Rednaxela 19:23, 26 August 2009 (UTC)
- Oh, and also, this bug may do more than cause nulls: it may cause duplicate entries in some cases I suspect, so it may affect the results of jk.mega.DrussGT 1.3.10wilo. I don't think it would affect the results of RougeDC willow though, because I'm pretty sure that bug was introduced after. --Rednaxela 19:29, 26 August 2009 (UTC)
I believe I have found the following problem: Arrays.copyOf is only supported since Java 6, so your code can't be compiled or run under earlier versions of Java. --Positive 22:49, 27 August 2009 (UTC)
- Haha, Fnl also noticed that earlier today. Fixed, and tested to have no/negligible effect on performance :) --Rednaxela 00:10, 28 August 2009 (UTC)
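For reference, a Java 5 compatible equivalent can be built on System.arraycopy; a minimal sketch (not the tree's actual code):

static double[] copyOf(double[] original, int newLength) {
    // Like Arrays.copyOf: copy into a new array, truncating or zero-padding.
    double[] copy = new double[newLength];
    System.arraycopy(original, 0, copy, 0, Math.min(original.length, newLength));
    return copy;
}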
Hey Rednaxela, good job! One thing that you might improve the speed on (if I understand your tree correctly) is the sqrPointRegionDist function. You don't need to recalculate all distances between the bounds and the point, only the one for the *changed* bound since the last check, if that makes sense. :) Also, I think it would be nice to have another version of the tree which doesn't apply weighting while searching. In any case, it's a very nice piece of code, and I'm going to try to use it in Portia (thanks to your license!). :) --Positive 21:45, 27 August 2009 (UTC)
- Hmm... you're right, that is an optimization that could be done, that is, if the overhead of caching it doesn't outweigh the benefit. I think in order to implement that I'd need to either store temporary values in the nodes, or maintain a third stack that remembers the distance of its bounds. I'll give it a try and keep that change if it's worth it, haha. As far as a version that doesn't apply weighting, I've considered that, however I decided it was simpler to maintain a single version. If you want to remove the weighting capabilities, all it takes is 1) removing the 'weights' variables where they are declared, 2) removing the references to them in the distance functions, and 3) removing the couple of statements that set them. Though, I suppose that if changes to the tree cease to happen for a long time I'll post the version with weighting removed as well; I just don't want to be worrying about them getting out of sync while it's in (semi-)active development. Anyways, I'm glad it's liked :) --Rednaxela 00:10, 28 August 2009 (UTC)
- Okay, that makes sense. I have some other feature requests: A size() function to keep track of how many entries there are in the tree, and it'd be cool if there was something to make sure the size stays below a certain point. A possible problem might be the line with tree.weights = weights;, because it doesn't reset the weights afterwards. Also, I'd selfishly like the addPoint functions and such to be nonstatic and without the KdTree<T> tree parameter (but I suppose that is a matter of taste). :P --Positive 01:03, 28 August 2009 (UTC)
- Alright, I gave that optimization of calculating the node distance as-needed from the splits a try. The result is that: 1) if it replaces the existing calculation of the distance to the tight boundary, the performance is worse, and 2) if I just use it to shortcut past the full distance-to-tight-boundary calculation, the performance is about the same. This indicates to me that 1) calculating the exact tight-boundary distance for nodes eliminates a rather significant number of node visits, and 2) the overhead of tracking the data for split-based node distance is large enough that it's not worth it to use it as a shortcut. --Rednaxela 06:23, 1 September 2009 (UTC)
- Keeping the weights across calls was intended, but it was a bad API for it; it's now split into a setWeights() call. Haha, I don't know what I was thinking when I made the addPoint and nearestNeighbours functions static; I was in a strange mindset when I first wrote this beast. They're now non-static, and tested to have no impact on performance. I'm not sure what you mean by "make sure size stays below a certain point". Do you mean removing old points? --Rednaxela 03:39, 28 August 2009 (UTC)
- Great. :) Yes, that's exactly what I meant. It seems useful and safe to have some kind of deletePoint function, and perhaps a built-in linked list system to remove old entries, so that the data stays up to date & won't eventually fill up the memory. --Positive 03:48, 28 August 2009 (UTC)
- Well... an optional limit on size is now supported. Deleting arbitrary points, however, is not, because that would conflict with the size limit: it would be far too painfully slow to remove an arbitrary point from the linked list that tracks the order points were added in. Also, to make sure this doesn't impact speed normally, that list is never even created if no limit was specified. Anyways, the size of the code is getting closer to 500 lines than I'd like, so I think this is enough non-optimization features for any sane usage of it. --Rednaxela 04:45, 28 August 2009 (UTC)
I don't know if this is standard or not, but it seems strange that the constructor isn't the very first method in the class. And the method/sub-class ordering is more like... DrussGT's =) They are now ordered by the time and place you added the code, aren't they? I know it isn't going to improve the execution speed, but... » Nat | Talk » 14:40, 28 August 2009 (UTC)
- Nah, not ordered by the time added at all. At one point I had the functions for 'bounds' grouped beside the variables, but they kind of drifted apart. There wasn't any real reason for the ordering except that nearby things often had some relevance to each other. Reformatted that a bit now so as to be saner and maintain that relevance-ordering :) --Rednaxela 14:56, 28 August 2009 (UTC)
I've been testing with the pre-limit version of your tree in Portia, and it seems to be working. :) Now with your new version, I do get an error at line double[] location = this.locationStack.pop();: (The method pop() is undefined for the type LinkedList<double[]>). --Positive 15:53, 28 August 2009 (UTC)
- Ugh... Any reason why you're on ancient Java 5? Anyway, that method in LinkedList only exists in 6 not 5. Making it Java 5 compatible again... --Rednaxela 16:11, 28 August 2009 (UTC)
- Haha, actually I'm on Java 6. I compile everything using the Java 5 library though, so I know for sure I don't make code that's not Java 5 compatible. :) --Positive 16:14, 28 August 2009 (UTC)
I didn't exactly have time to look at this, but just out of curiosity, does this tree return the scans sorted by distance? By the way, congratulations on your optimization work. --Navajo 00:43, 1 September 2009 (UTC)
- It returns the entries in max-heap array ordering, since that is the most efficient way to construct the list of nearest neighbours. Maintaining a list of "lowest n values" doesn't actually require the list to be fully ordered; it only requires enough ordering that it's easy to throw out the largest value when over the size limit, making a max-heap the perfect structure for the task. It would be fairly simple and fast to convert this to sequential order for final output, but I saw no reason to bother with it since I've never seen a DC gun/movement care about the order of the outputs, only the distances associated with them. --Rednaxela 01:25, 1 September 2009 (UTC)
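To illustrate the idea (a sketch only, not the tree's internals): keeping the lowest n of a stream of distances with a max-heap means each candidate only needs to be compared against the largest value currently kept.

import java.util.Collections;
import java.util.PriorityQueue;

class TopNSketch {
    // Keep the n smallest distances seen, using a max-heap (assumes n >= 1).
    static PriorityQueue<Double> smallestN(double[] distances, int n) {
        PriorityQueue<Double> heap = new PriorityQueue<Double>(n, Collections.reverseOrder());
        for (double d : distances) {
            if (heap.size() < n) {
                heap.offer(d);
            } else if (d < heap.peek()) {
                heap.poll(); // evict the current largest kept value
                heap.offer(d);
            }
        }
        return heap; // max-heap ordering, not sequential
    }
}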
- My DC Gun does care about the order of the output. It computes pif angles discarding the ones that end outside the battlefield, so I need to know that I end up with the closest entries possible. --ABC 10:15, 1 September 2009 (UTC)
- Err... I'm not sure I understand... Any sane DC-PIF gun would be discarding results that end outside the battlefield like you say, but it's not as if the order of output affects which ones end up inside/outside the battlefield. The outputted values are guaranteed to be the closest entries possible, just not in sequential order. I don't see how any gun could sanely care about the output order, but if so, which order would you prefer, ascending or descending? --Rednaxela 12:39, 1 September 2009 (UTC)
- I'm speculating, but maybe he builds a cluster of size M and then tries to keep the best N<M that stay inside the field. But I think that a sorted cluster should be optional as it may impact the performance for most of the tree users with no gain. --zyx 13:24, 1 September 2009 (UTC)
- Yes, that's exactly what I do. If it's too much trouble I can always sort them myself. --ABC 14:09, 1 September 2009 (UTC)
- Nah, it's not too much trouble, and as I explain to Nat below, it's quicker for me to get sorted output from the heap than it is for you to sort the data after. I already have this implemented on my computer at home, all I have to do is test how it impacts performance, and if it's measurable, make it optional. --Rednaxela 14:52, 1 September 2009 (UTC)
- Hey ABC, unlike what you said... Simonton's tree may not always output in descending order! Reading Simonton's code indicates that it uses a PriorityQueue, and the order it outputs is the same as the iterator of PriorityQueue. If you read the java docs, it very clearly states "The Iterator provided in method iterator() is not guaranteed to traverse the elements of the PriorityQueue in any particular order. If you need ordered traversal, consider using Arrays.sort(pq.toArray()).", therefore if Simonton's tree is currently returning in descending order, then it just happens that your JVM does that. This means that the behaviour of versions of Shadow using Simonton's tree is VM-dependent. But after I make the sorting supported in my tree when I get home, Shadow using my new tree will be able to be sure its performance is not dependent on the JVM in question. --Rednaxela 15:21, 1 September 2009 (UTC)
- Thanks for that info! You are a true Java Guru. At the time I just blindly plugged Simonton's tree instead of my brute force method, noticed the entries were "backwards" and adjusted my code accordingly... --ABC 15:53, 1 September 2009 (UTC)
- I don't understand: how does the order affect the result of the PIF algorithm? On the other hand, returning the sorted data would take O(n log n) more time due to the heap, and that definitely affects bots that don't care about the ordering. If we sort them ourselves, it would still take O(n log n). I see no point in adding the ordering to the tree, unless, of course, it won't affect the speed. » Nat | Talk » 14:23, 1 September 2009 (UTC)
- I want to find the 10 scans closest to the current one that end inside the battlefield. I ask the tree for the 20 closest scans, and then calculate the path for each of them, starting from the closest one, until I get 10 valid ones. --ABC 14:39, 1 September 2009 (UTC)
- (edit conflict, crossing out redundant things)
It was like Zyx said. The numbers are made up but it's like this: Shadow gets the 50 closest entries from the kD-Tree, it then runs PIF on each entry starting with the one with the least distance, and it stops processing once it has the PIF results of 10 that didn't run into walls. Again, 50 and 10 are made up numbers for the sake of example. Also Nat, you're very wrong about the speed there. Firstly, the time taken by reordering the n final results is MUCH quicker than the nearest-n-neighbours search itself, because O(n log n) only tells you how the speed scales with the number of values sorted; it doesn't tell you the base speed, which happens to be far quicker than the base speed of the nearest-n-neighbours algorithm. Secondly, I can make the output results sorted in notably LESS than O(n log n) time. Converting max-heap ordering to sequential ordering only requires the second part of a heapsort, rather than a full run of a sorting algorithm. --Rednaxela 14:52, 1 September 2009 (UTC)
- I just checked and your made up numbers (50/10) are exactly what I currently use in Shadow's melee gun. :) --ABC 15:53, 1 September 2009 (UTC)
- Oh wow, my guessing skills are quite good it seems, haha. As far as speed of that goes Nat, I'm pretty sure that in most DC-PIF guns, the PIF takes considerably longer than the DC. --Rednaxela 16:08, 1 September 2009 (UTC)
- Merge sort, which is the default for Arrays.sort(), is twice as fast as heap sort according to my book on algorithms, which has benchmark times printed. And the second part of heap sort is still O(n log n); you still need to do down-heapify n times. Unless there are faster ways to do it. » Nat | Talk » 15:13, 1 September 2009 (UTC)
- True, the second part of heap sort still scales at a rate of O(n log n) but should be considerably faster than a full heapsort. I may benchmark the difference later, but I'm extremely doubtful that Arrays.sort() could be faster for data that is already in heap ordering. Plus Nat, one very important consideration that you leave out, is that in order to use Arrays.sort(), unless you're sorting a simple array of numbers, your input values need to be encapsulated in objects which implement Comparable, and comparisons of Comparable are well over 5x slower than direct numerical comparison due to function call overhead. Avoiding overhead such as that is part of why my tree is just so damn fast ;) --Rednaxela 15:31, 1 September 2009 (UTC)
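For the curious, the "second part of a heapsort" mentioned above looks roughly like this; a sketch over a plain array of distances in max-heap order, not the tree's actual entry type:

class HeapOrderSketch {
    // Turn a max-heap ordered array into ascending order, in place.
    static void heapToSorted(double[] a) {
        for (int end = a.length - 1; end > 0; end--) {
            double tmp = a[0]; a[0] = a[end]; a[end] = tmp; // move the current max to the end
            siftDown(a, end);                               // restore the heap on a[0..end-1]
        }
    }

    static void siftDown(double[] a, int size) {
        int i = 0;
        while (2 * i + 1 < size) {
            int child = 2 * i + 1;
            if (child + 1 < size && a[child + 1] > a[child]) child++;
            if (a[i] >= a[child]) break;
            double tmp = a[i]; a[i] = a[child]; a[child] = tmp;
            i = child;
        }
    }
}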
I'm not sure where in this page I should post this since there are so many topics here, but anyway, while trying to improve Gibbs I found two situations in which your tree throws exceptions. First is when searching for 0 neighbors. This one I easily solved by adding a simple if(count < 1) return new ArrayList<Entry<T>>(0);
at the beginning of the nearestNeighbor method. The second problem is when you limit the tree size to something smaller than bucketSize. When the method removeOld() is called, it throws a NullPointerException when checking the condition of the following loop:
do {
    cursor.locationCount--;
    cursor = cursor.parent;
} while (cursor.parent != null);
which is easily solved by changing it to
while (cursor != null) {
    cursor.locationCount--;
    cursor = cursor.parent;
}
I know these problems are quite specific and easy to solve, but I just felt like I should report them. Also, if it is not much of a trouble, could you please add an option to remove the closest scan instead of the oldest? --Navajo 01:15, 13 April 2010 (UTC)
Well, the version currently posted on this page I kind of consider deprecated in favor of my new version. The new version is slightly faster, is more flexibly coded, happens to behave right for the zero-sized search, and adds an additional feature of providing an iterator for allowing incremental search (but still best to specify an explicit max search size for performance reasons). On the other hand, the new tree doesn't yet have this support for removing old/close data. I could add this to it shortly if there is demand though. Would you want to use that Navajo? :) --Rednaxela 01:59, 13 April 2010 (UTC)
I'm planning to test the impact of removing the closest instead of the oldest scan, but I can add this feature to the tree myself if it is too much of a trouble to you. --Navajo 03:12, 13 April 2010 (UTC)
Warning! Rounding Errors!
Argh! It seems rounding errors are evil! Evil evil evil! I was toying with my tree in a very lightly segmented gun where exact-same-locations will occur frequently, and it did that stupid rampant/infinite branching again. The cause? It turns out to be rounding, indicated by this excerpt from some debugging messages:
...
Split Dimension: 1, Split Value: 1.0, Range: 0.9999999999999999 to 1.0, Width: -1.1102230246251565E-16
Split Dimension: 1, Split Value: 1.0, Range: 0.9999999999999999 to 1.0, Width: -1.1102230246251565E-16
Split Dimension: 1, Split Value: 1.0, Range: 0.9999999999999999 to 1.0, Width: -1.1102230246251565E-16
Split Dimension: 1, Split Value: 1.0, Range: 0.9999999999999999 to 1.0, Width: -1.1102230246251565E-16
Split Dimension: 1, Split Value: 1.0, Range: 0.9999999999999999 to 1.0, Width: -1.1102230246251565E-16
Split Dimension: 1, Split Value: 1.0, Range: 0.9999999999999999 to 1.0, Width: -1.1102230246251565E-16
...
When the only values in the dimension were 1.0, and just BARELY below it, the splitValue would be set to the average of those two values. Due to rounding errors, the average happened to be the same as the higher value... causing the tree to lump all values into the left child node and try branching again, and again, and again :(
So as a warning to those currently using the tree, this obscure situation could cause the tree to lock up... I'll release a fix soon --Rednaxela 00:54, 30 August 2009 (UTC)
- Simonton's fix to this was to only allow his tree a certain recursive depth (he used 500), after which it would start throwing away values. This way it was also (sort of) possible to have a tree with 'rolling averages' by setting the max recursive depth very low. However, it's more of a bandaid than a fix for the root of the actual problem. --Skilgannon 13:03, 30 August 2009 (UTC)
- For this particular issue, that would indeed be a very very poor bandaid, and anyways I have a proper fix in place now. As for having a limit on tree size, I very strongly prefer the limit on the number of entries I have implemented. While it won't put a hard limit on tree depth or nodes, it will generally keep them within reasonable bounds and will always toss out old data in the exact order it was entered. --Rednaxela 16:07, 30 August 2009 (UTC)
Well, this sucks. I thought I could fix it simply by making a > into a >=, because I thought it would always round one way... turns out it doesn't... it rounds whatever way it's in the mood for, it seems... bah. Oh, and by the way ABC, curse Shadow for hitting such almost-1.0 velocities as to cause rounding issues :P --Rednaxela 02:22, 30 August 2009 (UTC)
I'm sure there are lots of ways to deal with this, but just a comment on how my insert method handles a similar situation. Values less than the split go to the left, values greater go to the right, and values equal to the split go to alternating sides. One other implication is that when removing a node, you have to check both sides when the value is equal to the split value, but that's easy enough. --Voidious 02:33, 30 August 2009 (UTC)
- I guess if you had 19 0.9999's and one 1.0 and it used 1.0 as the split, you'd still have this problem unless the first alternating side also alternated. So I suppose a different special case would probably work a lot better. Another thing that sucks about this, at least if you use the tree the way I do... Rounding every number to, say, 10 digits seems pretty reasonable, but I wouldn't want to do that either, because I count on the fact that the neighbors returned from the tree are the same exact objects I inserted. :-/ --Voidious 02:43, 30 August 2009 (UTC)
(edit conflict) I saw that alternating approach, but I really don't like that approach for several reasons. It increases the balance of the tree very slightly, but it comes at the cost of making each node wider than it needs to be. Not only do deletes have to search more nodes, Voidious, but searches do as well. For example, presume a dimension only has values of 0.0, 0.5, and 1.0, and the first split in the tree is at 0.5. Then the full search has to be repeated completely on both sides, and thus could have a huge impact if such a split happens to be the first split in the whole tree. Anyways, I now have a fix; the solution is: if the split value is equal to the maximum node value (which will only occur if they're so close that no double value between them could exist), then it sets the split value to the minimum node value instead, which guarantees some values will be put in each node. Extensive testing is showing this is working quite nicely :) --Rednaxela 02:48, 30 August 2009 (UTC)
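A sketch of the fix described above (illustrative, not the tree's exact code):

class SplitFixSketch {
    // Choose a split value between min and max, guarding against the midpoint
    // rounding up to max when the two values are adjacent doubles.
    static double chooseSplit(double min, double max) {
        double split = (min + max) * 0.5;
        if (split == max) {
            split = min; // guarantees at least one entry lands on each side
        }
        return split;
    }
}

For example, chooseSplit(0.9999999999999999, 1.0) would otherwise compute a midpoint of exactly 1.0, which is the case from the debug output above.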
- Ah, cool. I agree it can unbalance the tree, but I don't think it makes searches any slower (otherwise). Mine definitely doesn't account for the equal-to-split edge case in its "findLeaf" - I recall thinking it would need to account for that, then realizing it didn't.
- When coming back up the tree, you test if the other side of the tree could have a node with a distance to the search point that is lower than some threshold. Since the value could be infinitely close to the split value, you have to just assume it is the split value, right? So while you may take a wrong turn while descending the tree if you hit values equal to the split value, you'll check the other side on the way back up, which you're just as likely to have had to do anyway. In your example, you'd need to search the other side anyway - know what I mean? It's been a while since I wrote my tree, maybe I'm misunderstanding something... =) But I'm pretty sure it's just as accurate and just as fast for searches.
- Deletes need to find an actual value that's already in the tree, though, which is a different scenario. (And I don't particularly care how fast deletes are.) --Voidious 03:11, 30 August 2009 (UTC)
- Which reminds me, I actually wrote this case for another scenario, which might be a problem in your current solution, too: if all the nodes have the same value for the split dimension. --Voidious 03:19, 30 August 2009 (UTC)
- I presume you mean all entries, not all nodes. In that case yes, I already deal with that case. I always set the split dimension to the widest dimension, and whenever "widest" is still 0 width, it will simply double the bucket size for that particular node. My logic is that the scenario you describe could only happen with "split on widest dimension" when all entries are in exactly the same spot, in which case there could never be any performance gain by putting them in separate nodes. On a related note, one possible optimization I could also do with this would be giving such nodes a special marking and avoiding redundant distance calculations in them. I haven't bothered with that yet though, since I doubt it would occur often in practice with a well-segmented gun. --Rednaxela 04:51, 30 August 2009 (UTC)
- Actually Voidious, in my tree checking the other side anyway wouldn't happen. I keep track of an extra-tight bounding box for each node and compare to that, and since my splitting doesn't alternate, the tight bounding box will never overlap the split value. So... you're right about some trees, but not mine :) --Rednaxela 04:51, 30 August 2009 (UTC)
Some major changes
Man, this tree keeps taking time away from finishing my melee gun, but anyways: I've done a major update now which gives the tree a nicer and more flexible API. In particular, you now construct it with things such as new KdTree.SqrElucid<T>(dimensions, sizeLimit)
or new KdTree.WeightedManhattan<T>(dimensions, sizeLimit)
instead of new KdTree<T>(dimensions, sizeLimit)
and modifying the distance function in your copy of the code (I'm looking at you Skilgannon :)). This also makes it easy to use the unweighted version, which is faster by 5-10%. One interesting note I found is that while Manhattan distance is less complicated to calculate, it makes it harder for the tree to eliminate branches and thus is twice as slow (looking at you again Skilgannon, considering your recent concerns with DrussGT's speed). What do people think? I'm also considering making a stripped down 'lite' version with no weighting, only sqrElucid distance, some of the more aggressive and less significant optimizations removed, and no size limiting, which would be nearly as fast. --Rednaxela 05:52, 2 September 2009 (UTC)
Hmm, so using Manhattan approximately doubles the number of branches that must be recursed? Is this dimension-dependent, ie., would the fact that I'm using 11 dimensions make this a factor of more than 2? I'd gladly switch to using Euclidean distance to get this speed increase, but from what I've found, my gun scores significantly better using Manhattan distance. The difference between the two is about 0.5 worth of TCRM score. I'm guessing that how my dimensions are defined somehow favors the use of Manhattan over Euclidean, but I don't currently have the time to tweak my segments into a form that works well for Euclidean instead =) Another point, for bots that only use one weighting scheme throughout the match, isn't it quicker to multiply the point by the weighting before storing it, and then afterwards treat the dimensions as unweighted? This is what I've been doing in DrussGT since way back when... it also means that I can apply non-linear weightings without any performance hit at runtime. --Skilgannon 08:50, 2 September 2009 (UTC)
- I didn't run an exact count of the number of branches that must be recursed, but since the time taken is doubled, I feel pretty sure the number of branches recursed is at least doubled. It being possibly dimension-dependent might indeed be the case, but I don't think 11 dimensions would make it more than my test showed, since my test was with the 13-dimension data recorded from Diamond. About weighting: yes, of course it's quicker to multiply the point by the weighting before storing it. The weighted versions of the tree are only intended for bots which do dynamic weighting, like Diamond does (melee weighting vs 1v1 weighting). Static weighting is considered to be something that bots are responsible for doing when they create their dimensions in the first place. --Rednaxela 12:42, 2 September 2009 (UTC)
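The static pre-weighting trick mentioned above can be as simple as the following sketch (a hypothetical helper, not from either bot's code):

class PreWeightSketch {
    // Scale each coordinate by its fixed weight before inserting into the
    // tree, so searches can treat the stored data as unweighted afterwards.
    static double[] applyWeights(double[] point, double[] weights) {
        double[] scaled = new double[point.length];
        for (int i = 0; i < point.length; i++) {
            scaled[i] = point[i] * weights[i];
        }
        return scaled;
    }
}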
I haven't tested it yet, but I would like to say, fantastic job! This definitely gives "everyone what he/she wants". I only wonder, don't abstract classes make the tree that much slower (because of the overhead in having to decide which distance() function to use for example)? I'm not a java expert, so it might be that that is optimized at runtime. --Positive 14:29, 2 September 2009 (UTC)
- Thanks! I was unsure about that myself at first, but I tried it, gave it a good performance test, and it performed exactly the same it seemed, at least in Java 6 (I don't know for certain, but I bet old enough VMs wouldn't perform equally with them). In fact, the slight tweak of having the distance functions read 'weights' from the class instead of passing it in as an argument seemed to cause this revision to have a very slight improvement in performance. --Rednaxela 14:44, 2 September 2009 (UTC)
- Great, you've really done your homework. :) I just looked over the code, and one little thingie: it seems to me you could use a regular "int" for sizeLimit instead of Integer. (You can check for sizeLimit == 0 instead of sizeLimit == null.) Might give a small improvement. :P --Positive 14:58, 2 September 2009 (UTC)
- Hmm, true, I could do that. That wouldn't have much of a performance impact though I'd think, considering that == 0 and == null would be the same speed, and the numerical comparison when actually checking the value only happens once per insertion call. I might make the change anyway though, considering that I'm not really fond of the wrapper classes like Integer to begin with. --Rednaxela 15:09, 2 September 2009 (UTC)
I know I shouldn't burden you with any more feature requests, but I'd really like the following extension to your tree (because it is a lot less elegant to do outside of the class): As an addition to nearestNeighbor, I'd like to give an argument like (double)acceptedLowest[],(double)acceptedHighest[], that prevents the tree from returning any matches with a dimension value outside of the specified range. (That feature would be awesome in a new sologun I'm trying to make for Portia!) --Positive 19:45, 2 September 2009 (UTC)
- Hmm... well... that would be easy enough to add, except for one issue: The proper/efficient way to implement it would require a third distance function: Longest distance from a point to the inside of a region. I think the subclasses that implement the distance calculations are rather bloated/repetitive as it is really, without having to add that as well. So yeah, feel free to modify it for your usage, but adding that particular feature to the main version just adds too much bulk I think. --Rednaxela 00:24, 3 September 2009 (UTC)
- Well... I gave it a good try but couldn't work the bugs out of it, nor were preliminary performance indications looking promising. --Rednaxela 15:37, 5 September 2009 (UTC)
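For anyone attempting the modification in their own copy, the node-pruning half of such a range restriction can be done with a simple bounding-box test; a sketch (a simpler check than the third distance function mentioned above, and not from the tree's code):

class RangePruneSketch {
    // True if a node's bounding box [min, max] overlaps the accepted
    // per-dimension range [lo, hi]; nodes that fail can be skipped entirely.
    static boolean boxIntersectsRange(double[] min, double[] max, double[] lo, double[] hi) {
        for (int i = 0; i < min.length; i++) {
            if (max[i] < lo[i] || min[i] > hi[i]) {
                return false;
            }
        }
        return true;
    }
}

Individual results would still need a per-point range check, since a node's box can overlap the accepted range while some of its points fall outside it.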
Spelling
Isn't 'Elucidian' supposed to be 'Euclidean'? » Nat | Talk » 14:46, 5 September 2009 (UTC)
- Yep, you're right :) --Rednaxela 15:37, 5 September 2009 (UTC)
help with the Java
Hi Rednaxela, I'm interested in learning your kdTree and playing with the possible applications. I was wondering if you or someone might point me in the direction of an open source bot that uses your tree... robocode is my only experience programming, so a proper example of it implemented would greatly help and make my learning curve a lot more fun. Thx --Jlm0924 20:04, 17 September 2009 (UTC)
Skilgannon uses it in DrussGT, which is open source. --Voidious 22:06, 17 September 2009 (UTC)
- I wouldn't recommend RougeDC willow as a reference for usage of this kD-Tree, as it still uses an old (and subtly buggy) version. DrussGT would be a good example indeed. When I get around to releasing Glacier, that will also be an example. --Rednaxela 23:02, 17 September 2009 (UTC)
- Yeah, take a look at DrussGT. I use it in two different places, both the gun and the movement. In the gun it is used to find similar situations for finding an angle to shoot at. In the movement it is used to guess the bullet power the enemy will use to fire. Once you've extracted the source look at jk/mega/dgun/DrussGunDC.java and search for 'heapTree'. It should be fairly obvious where I create the tree, where I'm adding the data to the tree and where I retrieve the data from the tree based on how close it is to the current scan. Perhaps it would even be good to write a simple gun showing how the various trees are used, as a reference. --Skilgannon 09:08, 18 September 2009 (UTC)
- I think having a simple reference implementation is a really good idea. Though if by "various trees" you mean including those besides Rednaxela's, right now I'm not sure why anyone would choose anything else. =) I'm already debating optimizing mine or just switching to his. --Voidious 13:02, 18 September 2009 (UTC)
- I'm taking a look now... a couple of reference implementations in a simple bot would be a gold mine though! DrussGT is overwhelming to me :) I'll get some sleep and take a better look :) Thx guys --Jlm0924 16:00, 18 September 2009 (UTC)
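Until such a reference bot exists, a minimal hypothetical usage sketch may help; the constructor and method names here are assumed from the discussion on this page, so check the actual source for the exact signatures:

import java.util.List;

class UsageSketch {
    void example() {
        // 3 dimensions, keep at most 1000 points (oldest discarded first).
        KdTree<String> tree = new KdTree.SqrElucid<String>(3, 1000);
        tree.addPoint(new double[] {0.3, 0.7, 0.1}, "situation at tick 42");
        // Fetch the 10 stored situations nearest to the current one.
        List<KdTree.Entry<String>> nearest =
            tree.nearestNeighbor(new double[] {0.25, 0.6, 0.2}, 10, false);
    }
}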
sequentialSorting
Hey Rednaxela, still an awesome tree, but perhaps you could add a little note to the tree about how the sequentialSorting works... I only just found out it sorts highest distance first. :P --Positive 10:36, 24 September 2009 (UTC)
Yeah, it might be good to make a note of that somewhere. It was done that way to both 1) be consistent with how ABC (falsely) thought Simonton's one was working, and 2) because that's the order that values can naturally be extracted from the heap.
- Okay, got it, no problem. I've done a little alteration to your code to turn the order around, all fixed. :) --Positive 16:34, 24 September 2009 (UTC)
Range Search
Can I ask that you add this as well? This is part of some standard KD-Tree implementations in other languages. You can probably read over how it works in my tree, but adding support for it shouldn't cause problems. It can be used in, say, older style pattern matchers for getting matches to the current enemy state within hyper-rectangular bounds (in this case velocity and heading delta). --Chase 02:49, 3 March 2010 (UTC)
Sure, wouldn't be hard to add in my rewrite. It might be a while till I get around to that though, since I'm really anxious to try my (possibly completely novel?) ideas for making it faster and such. --Rednaxela 03:27, 3 March 2010 (UTC)
- Actually I had tested some of those on my tree the other day while trying to optimize it; the "choosing a better dimension if the current one has only one value" actually produced decent gains, assuming you chose a good dimension. In my first one I chose the one with the biggest difference, but I didn't like it, even though it worked better than my "iterate through dimensions in order till you find one that has more than one value"; my guess is the more values a dimension has, the better (but that is hard to track). None of the data I posted is from those, but it got around 0.3 to the listed maps 0.4. --Chase 03:55, 3 March 2010 (UTC)
- Are you talking about the dimension to split on? That's very different than the things I have in my plans section. As far as what dimension to split on and where to split in it, I already do "middle of the dimension with the widest variance" (which just today I noticed some research papers conjecturing that it is perhaps the optimal kd-tree splitting method). --Rednaxela 04:06, 3 March 2010 (UTC)
- Oh, alright, that's what you meant when you say that. Hehe, I guess I am more than a bit out of date. --Chase 04:40, 3 March 2010 (UTC)
- Yeah, what I meant with the "dimension-pruned" things, is that the calculation of distance between the search point and each point in a bucket, can be made to not repeat the part of the summation for dimensions where the node has a width of 0, instead calculating that part of the summation once for every point in the bucket. --Rednaxela 05:12, 3 March 2010 (UTC)
Rewrite progress
Well, I have my rewrite largely done now. So, to those who thought it would be neat to have a search iterator... I now have it! At no significant performance penalty in fact, though it does operate best when you give it a 'max iteration length'; but if you stop before that, you still gain some time, since it never descends into nodes until they could possibly be required for next(). The reason it's still good to provide a 'max iteration length' is so that it can routinely prune its list of points it has evaluated. In order to make this possible efficiently, I coded up an "Interval Heap" double-ended queue that is very fast. Results look encouraging so far, however I didn't see the gain I hoped to see from its more flexible path selection code. See the following results:
RESULT << k-nearest neighbours search with Rednaxela's Bucket kd-tree (Unsorted Output) >>
: Average searching time = 0.059 miliseconds
: Average worst searching time = 0.899 miliseconds
: Average adding time = 7.15 microseconds
: Accuracy = 100%

RESULT << k-nearest neighbours search with Red's "Next Gen" kd-tree (Sorted Output) >>
: Average searching time = 0.061 miliseconds
: Average worst searching time = 0.936 miliseconds
: Average adding time = 7.47 microseconds
: Accuracy = 100%

RESULT << k-nearest neighbours search with Rednaxela's Bucket kd-tree (Sorted Output) >>
: Average searching time = 0.061 miliseconds
: Average worst searching time = 1.086 miliseconds
: Average adding time = 7.15 microseconds
: Accuracy = 100%
It's a pretty close race despite the code structure and details of the search algorithm being rather different. Sets a new record for fastest tree with sorted output anyway, and I should be able to improve the results further... --Rednaxela 07:25, 4 March 2010 (UTC)
- I take it the output from the iterator is sorted, so I assume you may have to search additional buckets to find the next nearest without completely exhausting the current one. How can you do that without much of a performance hit? Do you just make the list internally and supply the values as needed, then grab additional batches as needed? --Chase 10:15, 4 March 2010 (UTC)
- Pretty much. Its procedure is roughly as follows (see the sketch after this list):
- Could any nodes in the pendingPaths heap have points closer to the search point than the closest point in the evaluatedPoints heap? If yes, do the following. Loop it so long as this condition is true.
- Pop the smallest distance node out of pendingPaths, and descend it down the "first guess" path according to what splits lead closer to the search point
- During the descent, put the branches not taken into the pendingPaths heap, with the distance between that path's bounding box and the search point computed.
- At the bottom of the descent, iterate through all points, computing their distance to the search point
- Insert the point in the evaluatedPoints heap if either 1) The size of evaluatedPoints is less than the max number of points remaining to return, or 2) the distance is smaller than the largest distance in evaluatedPoints
- After each insertion, if the size of evaluatedPoints is greater than the max number of points remaining to return, remove the largest point from evaluatedPoints
- Pop the smallest distance result out of evaluatedPoints and return it.
- Interestingly, even if I pass the iterator a 'max points to return' that is essentially unlimited, as would allow iterating the whole tree, it's merely 30% slower to get the nearest 40 points, still faster than any tree other than Duyn's and my own. --Rednaxela 15:06, 4 March 2010 (UTC)
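A condensed sketch of that procedure; the Node type here is a stand-in for illustration, not the actual tree's classes (internal nodes are assumed to have both children, distances are squared Euclidean, and count is assumed to be at least 1):

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

class BestFirstSketch {
    static class Node {
        Node left, right;            // null in leaf nodes
        List<double[]> points;       // only set in leaf nodes
        double[] minBound, maxBound; // bounding box tracked per node

        // Squared distance from the query point to this node's bounding box.
        double minDistance(double[] q) {
            double d = 0;
            for (int i = 0; i < q.length; i++) {
                double v = Math.max(minBound[i] - q[i], Math.max(0, q[i] - maxBound[i]));
                d += v * v;
            }
            return d;
        }
    }

    static double sqrDist(double[] a, double[] b) {
        double d = 0;
        for (int i = 0; i < a.length; i++) { double v = a[i] - b[i]; d += v * v; }
        return d;
    }

    static List<double[]> nearest(Node root, final double[] query, int count) {
        // Min-heap of deferred branches, closest bounding box first.
        PriorityQueue<Node> pendingPaths = new PriorityQueue<Node>(11, new Comparator<Node>() {
            public int compare(Node a, Node b) {
                return Double.compare(a.minDistance(query), b.minDistance(query));
            }
        });
        // Max-heap of evaluated points, farthest first, capped at 'count'.
        // (A real implementation would cache distances instead of recomputing them here.)
        PriorityQueue<double[]> evaluated = new PriorityQueue<double[]>(count, new Comparator<double[]>() {
            public int compare(double[] a, double[] b) {
                return Double.compare(sqrDist(b, query), sqrDist(a, query));
            }
        });
        pendingPaths.add(root);
        while (!pendingPaths.isEmpty() && (evaluated.size() < count
                || pendingPaths.peek().minDistance(query) < sqrDist(evaluated.peek(), query))) {
            Node cursor = pendingPaths.poll();
            while (cursor.points == null) {
                // Descend the "first guess" path, deferring the branch not taken.
                Node near = cursor.left.minDistance(query) <= cursor.right.minDistance(query)
                        ? cursor.left : cursor.right;
                pendingPaths.add(near == cursor.left ? cursor.right : cursor.left);
                cursor = near;
            }
            for (double[] p : cursor.points) {
                if (evaluated.size() < count) {
                    evaluated.add(p);
                } else if (sqrDist(p, query) < sqrDist(evaluated.peek(), query)) {
                    evaluated.poll(); // evict the current farthest kept point
                    evaluated.add(p);
                }
            }
        }
        return new ArrayList<double[]>(evaluated); // heap order, not sequential
    }
}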
Adding "replaceMax" and "replaceMin" methods to the interval heap helped performance a bit:
RESULT << k-nearest neighbours search with Rednaxela's Bucket kd-tree (Unsorted) >>
: Average searching time = 0.061 miliseconds
: Average worst searching time = 1.281 miliseconds
: Average adding time = 7.23 microseconds
: Accuracy = 100%

RESULT << k-nearest neighbours search with Rednaxela's Bucket kd-tree (Sorted) >>
: Average searching time = 0.062 miliseconds
: Average worst searching time = 1.372 miliseconds
: Average adding time = 7.23 microseconds
: Accuracy = 100%

RESULT << k-nearest neighbours search with Red's "Next Gen" kd-tree (Storted) >>
: Average searching time = 0.059 miliseconds
: Average worst searching time = 1.119 miliseconds
: Average adding time = 7.7 microseconds
: Accuracy = 100%

BEST RESULT:
- #1 Red's "Next Gen" kd-tree (Sorted) [0.0591]
- #2 Rednaxela's Bucket kd-tree (Unsorted) [0.061]
- #3 Rednaxela's Bucket kd-tree (Sorted) [0.0625]
--Rednaxela 20:06, 4 March 2010 (UTC)
Interestingly, the performance improvement is more obvious on my netbook:
RESULT << k-nearest neighbours search with Rednaxela's Bucket kd-tree (unsorted) >>
: Average searching time = 0.301 miliseconds
: Average worst searching time = 19.547 miliseconds
: Average adding time = 14.88 microseconds
: Accuracy = 100%

RESULT << k-nearest neighbours search with Rednaxela's Bucket kd-tree (sorted) >>
: Average searching time = 0.313 miliseconds
: Average worst searching time = 19.032 miliseconds
: Average adding time = 15.06 microseconds
: Accuracy = 100%

RESULT << k-nearest neighbours search with Red's "Next Gen" kd-tree (sorted) >>
: Average searching time = 0.283 miliseconds
: Average worst searching time = 18.02 miliseconds
: Average adding time = 15.24 microseconds
: Accuracy = 100%

BEST RESULT:
- #1 Red's "Next Gen" kd-tree (sorted) [0.2832]
- #2 Rednaxela's Bucket kd-tree (unsorted) [0.3009]
- #3 Rednaxela's Bucket kd-tree (sorted) [0.3134]
Note, both this and the above tests are done with 100 iterations, which seems to give a fair bit of accuracy. Also... it looks like the worst search times increase on my netbook by a much greater factor than the average does. I wonder what the cause of that is... --Rednaxela 20:40, 4 March 2010 (UTC)
Ouch! I just tested on a school computer, and the ratio of the average-worst-search-time to the average-search-time got over 4x worse, 19.4ms:0.05ms! I have no clue why... the only theory I have is that the computers with the slow worst times are on OpenJDK instead of normal JDK6. Whether it's the fault of javac or the jvm is unknown. I doubt it's related to CPU cache size, because the netbook (Intel Atom N270) has a 512kb cache while the school computer (Intel Core2 6300) has a 2048kb cache, yet has a larger discrepancy. --Rednaxela 01:37, 5 March 2010 (UTC)
R-tree variants?
Has anyone taken a look at the R-tree variants like the R* tree? I'm currently thinking of trying to adapt my 3rd gen tree (currently unreleased) into an R-tree variant. That means that overlap between nodes will be allowed; it will cause smaller bounding boxes and also make it easy to turn it into a 3-ary or 4-ary tree instead of a 2-ary tree. My current 3rd gen implementation happens to be coded in a way that would make conversion fairly easy I think... And my ideas for pending improvements (implicit subtree instead of bucket, and dimension pruning) should also be equally applicable to the overlap-allowing R-tree variants. The tighter bounding boxes at the cost of overlap, and wider branching, may well be worth it... --Rednaxela 19:28, 5 March 2010 (UTC)
Well, I tried a quick hack to use an R-tree-like insertion rule: instead of inserting points on the 'correct' side of the split, it inserts points where they'll cause the volume of the child node to increase the least. This leads to nodes with less volume, but possible overlap (violating normal kd-tree rules). The performance change was negligible in either direction. Now I just need to replace kd-tree-style node splitting with B-tree-style self-balancing mechanics, which should give a good result I hope. After that, I plan to try some of the 'forced reinsertion' voodoo of the R*-tree. --Rednaxela 18:21, 6 March 2010 (UTC)
Well, I got an R-tree working, but unfortunately its performance was quite poor... Unsure if the issue is a bug or if that's just how it has to be. I tried adding 'forced reinsertion' of tree nodes in R*-tree style, which seemed to help some, but performance, while improved, was still poor. I might try forced reinsertion of data points as well, but I have doubts it would improve results enough to compete with my kd-bucket-tree. I'm also thinking of changing the node-selection algorithm from the standard "whatever increases volume the least" like R-trees normally do, into something that discourages overlap of nodes more, which may help. --Rednaxela 17:16, 10 March 2010 (UTC)
Haven't had much of a chance to work on this since the last post, but I did find that the current R-tree variant I have is hitting around 25% of nodes, far too many. Going to indeed first try the forced reinsertion of points, to see if that helps. After that, I'll try something that discourages overlap more, which... oh... just happens to be what the X-tree does. It's encouraging to find that the types of approaches I think about going down have already been found to be promising by others. --Rednaxela 18:44, 29 March 2010 (UTC)
Tried the reinsertion of leaf points as well; it didn't help much. I'm now suspecting I have a bug in my implementation. The code is also getting a little messy. As such, I'm going to re-do the bulk of the R-tree code without basing it so much on the kD-Tree code this time. I need to both catch bugs and make the code cleaner before I feel comfortable trying out other changes like what an "X-Tree" implements. Progress is slow due to classes getting busy. --Rednaxela 02:01, 7 April 2010 (UTC)
Compressing unused dimensions
So for an experiment, that was listed as 'Dimension-pruned distance calculations', I made a variant of my tree that compresses point data in a simple way that I expected to improve performance: in leaf nodes, the 'points' array will omit axes whose values are the same for all points. This means some memory savings, plus it means that when adding all of those points, it calculates a 'base' distance from the values in the 'unused' dimensions, and then calculates the additional distance for each individual point only on the dimensions necessary. It seemed that this could in theory gain some performance by skipping a couple of dimensions in each distance calculation for an individual point. Unfortunately it seems that the cost, for each node, of making a new 'SearchPoint' array with dimensions matching the active ones in the leaf node outweighs the distance calculation time saved. I may be able to optimize this overhead out somewhat, but I'm not so optimistic about this path anymore. The possible gains are sure to be small with normal data sets, so I'm not sure it's worth trying to squeeze that drop of performance out of it. Onto other approaches for now... --Rednaxela 17:38, 10 March 2010 (UTC)
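Roughly, the idea in sketch form (names like 'activeDims' are hypothetical, not from the actual implementation):

class DimensionPruneSketch {
    // Squared distances from 'query' to each point in a leaf. 'baseDist' is
    // the contribution of the dimensions that are constant across the whole
    // leaf (computed once per leaf); 'activeDims' lists the remaining ones.
    static double[] leafDistances(double[][] points, double[] query, int[] activeDims, double baseDist) {
        double[] out = new double[points.length];
        for (int i = 0; i < points.length; i++) {
            double d = baseDist;
            for (int dim : activeDims) {
                double diff = points[i][dim] - query[dim];
                d += diff * diff;
            }
            out[i] = d;
        }
        return out;
    }
}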
Tree differences
I switched to your tree in my KNN classifier. Execution time for 1175 battles went from 855s to 582s, pretty sweet. (HOT takes 251, to give a baseline.) I was concerned at first that it hit 19 fewer shots out of 1,282,681; not that that isn't negligible, but out of concern that the results should be identical. But with this rounded off data, my guess is that it's probably just choosing some different points when two are identical, so I'm not gonna waste time investigating it. Anyway, good stuff. =) --Voidious 23:25, 15 March 2010 (UTC)
- Glad to see it's working well for you. Yeah, I wouldn't be surprised if that's the difference; one note is that, if I remember correctly, the tree will (currently) prefer the older data when there are a bunch of duplicates. Just out of curiosity, which version of the tree are you using? My rewrite is slightly faster than the one currently posted on this page. --Rednaxela 06:58, 18 March 2010 (UTC)
- I'm using the old one (from this page). I didn't realize the new one was available - I have some plans for that iterator. =) Thanks. --Voidious 13:59, 18 March 2010 (UTC)
- Hey btw, in the new tree, Eclipse yells at me that your two @Override's in SquareEuclideanDistanceFunction and the 4 in NearestNeighborIterator are Errors, since those methods don't override a superclass. --Voidious 15:51, 20 March 2010 (UTC)
- In Java 6, @Override applies to interfaces as well. I've noticed Eclipse has bugs with this however and doesn't support it even when using Java 6. Anything except Eclipse likes it. Might change it I suppose though... --Rednaxela 16:12, 20 March 2010 (UTC)
3rd gen tree licence
Is your 3rd gen kd-tree found in the mercurial repo licensed under the same zlib license? I found no license notice in your third gen kd-tree source code. --Nat Pavasant 13:38, 17 April 2010 (UTC)
Hmm yeah, I forgot to deal with that. Sure, that same zlib license works. --Rednaxela 14:56, 17 April 2010 (UTC)
Translating... reimplementing... etc
In June/July I'm going to be starting my thesis, a Simultaneous Localisation And Mapping algorithm, and in order to run at a half decent speed I'm probably going to need some form of 3D tree to project and compare different robot poses with the current scan readings of the environment. Anyway, being in an academic environment, Matlab (embedded in Labview) is the preferred language that I would implement my algorithm in. I'm not sure if you've worked with it, but it has very fast array operations despite the actual code itself being quite slow. I was wondering what changes you would make to the general design of your Kd-tree if array operations (e.g. y = sin(x) where y and x are arrays) yielded a speedup of 50x or so versus accessing each component individually in a sort of for x_i in x: y_i = sin(x_i) situation. Any major re-factoring? I'm going to need some sort of KNN implementation along with a cutoff threshold. Any thoughts or insights would be appreciated before I commit myself too deeply along one implementation =) Thanks --Skilgannon 18:08, 15 April 2011 (UTC)
Hmm... interesting question... I've worked with Matlab some, but have not yet had reason to use it very much, so it's not a language I'm particularly fluent in yet. As far as optimizing for those criteria...
- The simplest place to start would be translating simple things that iterate over axes into array operations. It should be trivial to do so for the distance calculation and selection of which axis to split on. Then other things like loops over the entries in buckets could be turned into array operations.
- Probably the *biggest* factor in all of this is bucket size. With array operations being tremendously superior to normal ones, it would certainly be the case that much larger bucket sizes would be more optimal for a matlab implementation. Maybe 10x bucket size? It would really take some playing around to find what works best.
- Does matlab have native implementations of heaps? Size-limited heaps are rather good for the task of incrementally creating a list of "top n values", which I take advantage of in my tree implementation in a few places. An implementation of a heap in matlab code would not be worth it however. If there is a native heap implementation use it; otherwise it'll probably be faster to stick with other sorted-list strategies.
- In my code I have a good number of "fail fast" checks that I've experimentally found to improve performance. In Java/C/etc, they're well worth it because often one can quickly rule out a condition before doing a slower check such as one that requires a distance calculation. In Matlab many of these "fail fast" checks will probably have more cost than a distance calculation, so wouldn't be worth it.
- Pay close attention to general matlab performance advice like here
- Perhaps consider some other approaches like various variants of the R-Tree? I don't see the bucket kd-tree used too often in literature, yet I could never get my r-tree variant attempts to perform anywhere near as well as the bucket kd-tree approach. Nevertheless, I think such approaches definitely warrant further investigation.
Do keep in mind that the gains you'd see from a tree compared to the naive approach will be much smaller in a matlab implementation than in a C or Java implementation, since the naive approach is simple and well suited to vectorization. It could be the case that a tree won't be worth it at all in Matlab, unless the size of the dataset is particularly huge.
If the performance of an optimized tree is really needed, I believe one can usually link C/C++ code to matlab code. Could this be considered as an option? I've seen some things in academic contexts which are primarily written in matlab but use C/C++ code in some performance critical spots. It seems to me that a search tree really is the perfect sort of thing to put in C/C++ code, because its design doesn't affect the output of the algorithm, just the speed. In fact, when the tree is not the focus of a thesis, I might argue that keeping the search tree in C makes its application clearer, because someone looking at how the tree is applied may not necessarily be concerned with the nitty-gritty of the n-nearest-neighbors search beyond knowing "and here it does a fast n-nearest-neighbors search", since its output will be the same whether it's naive or fast. That's just my judgement though, I'm sure many might disagree with me, haha.
Does that help? :) --Rednaxela 04:20, 16 April 2011 (UTC)
That's pretty much it exactly =) I'm estimating my initial 'warm up' scan will add around 1.4 million points, so a naive approach isn't really an option =). One other thing I was thinking about: if I had a line, rather than a point, how would I store that in a Kd-tree? Put in a whole bunch of points to make up the locus? The reason being, I have several options for storing where there are known walls and where there is known open space. One of them uses lines which essentially trace out the lasers fired from my range finder, with one end marked as 'solid' and the rest marked as 'open space'. As such, when I test a new point cloud in a certain position to see how well it fits, if the line (rather than just the endpoint) is in the top N closest to a point in the cloud, it needs to be returned as well. Thanks for all your help =) --Skilgannon 06:25, 18 April 2011 (UTC)
Well, one doesn't need to use a whole bunch of points to do lines. There are two main ways that come to mind to deal with non-point objects (lines, or anything else really):
- Consider a tree that explicitly tracks the bounding boxes of nodes, and uses them instead of the split value during the search (mine does this for performance enhancement reasons). Place the lines/objects in the tree based on their center point (it doesn't have to be their center necessarily, but that seems like as good a choice as any). Then expand the tracked bounding boxes based on the bounding box of the line/object. This means your nodes will have overlapping bounding boxes, but it'll still work.
- or, alternatively, place an entry for the object/line in every node that "fits". If a line spans over a split, insert a copy on each side. If you are tracking bounding boxes, crop them at the split so that they won't overlap. This approach leads to much duplication and more memory use, but may have better performance during the search due to the lack of overlapping nodes. On the other hand, this approach also requires extra handling to make sure you don't get duplicates in your search results.
Actually, those two approaches are inspired by what I've seen of R-Tree variants. It's common to use them for storing non-point objects for spatial search. R-Trees inherently have overlap, and the r+ tree essentially takes the duplication approach I mentioned to deal with it, whereas other variants of the r-tree take the approach of putting the object just into the one node that "fits best". As a note of comparison, the kd-tree is to the plain binary search tree as the r-tree is to the b-tree. If your lines start introducing overlap in your nodes particularly often, I'd really take a good look into r-tree variants, because the literature for them is written with overlap in mind (since r-trees inherently tend to have nodes with overlap anyway). --Rednaxela 12:24, 18 April 2011 (UTC)
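A sketch of approach #1 above (the Node fields here are hypothetical stand-ins for whatever the tree actually tracks):

class LineInsertSketch {
    static class Node {
        java.util.List<double[]> points = new java.util.ArrayList<double[]>();
        double[] minBound, maxBound; // bounding box tracked per node
    }

    // Insert a line segment (a, b) by its midpoint, then widen the node's
    // bounding box to cover the whole segment so a search can't miss it.
    static void insertLine(Node leaf, double[] a, double[] b) {
        double[] mid = new double[a.length];
        for (int i = 0; i < a.length; i++) {
            mid[i] = 0.5 * (a[i] + b[i]);
            leaf.minBound[i] = Math.min(leaf.minBound[i], Math.min(a[i], b[i]));
            leaf.maxBound[i] = Math.max(leaf.maxBound[i], Math.max(a[i], b[i]));
        }
        leaf.points.add(mid);
    }
}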