User talk:AW/kD-Tree

Happy Easter! I have written my recursionless tree, but I am getting unexpected results (my tree is faster than Rednaxela's by an unbelievable margin, and the results are different in Robocode). I haven't been able to figure this out, and I am wondering if I am using Rednaxela's tree incorrectly. Here is my tester class:

package tree;

import ags.utils.KdTree.SqrEuclid;

public class KDTreeTester {
	static KDTree gunTree = new KDTree(8);
	static SqrEuclid<double[]> agsTree = new SqrEuclid<double[]>(8, 400000);

	public static void main(String[] args) {
		int numOfPoints = 40000;

		// Build my tree with random 8-dimensional points.
		// Note: the loop starts at 1, so numOfPoints - 1 points are added.
		long startTime = System.nanoTime();
		for (int i = 1; i < numOfPoints; i++) {
			DataPoint addPoint = new DataPoint(8, i);
			for (int d = 0; d < 8; d++) {
				addPoint.setCoordinates(d, Math.random());
			}
			gunTree.addPoint(addPoint);
		}

		DataPoint searchPoint = new DataPoint(8, 50);
		searchPoint.setCoordinates(0, 0.2);
		searchPoint.setCoordinates(1, 0.1);
		searchPoint.setCoordinates(2, 0.6);
		searchPoint.setCoordinates(3, 0.9);
		searchPoint.setCoordinates(4, 0.2);
		searchPoint.setCoordinates(5, 0.7);
		searchPoint.setCoordinates(6, 0.3);
		searchPoint.setCoordinates(7, 0.5);

		System.out.println("time elapsed building = "
				+ ((System.nanoTime() - startTime) * 1E-6) + " milliseconds");

		// Time a single nearest-neighbor search in my tree.
		startTime = System.nanoTime();
		gunTree.getNearestNeighbor(searchPoint);
		System.out.println("time elapsed searching = "
				+ ((System.nanoTime() - startTime) * 1E-6) + " milliseconds");
		// To verify the result:
		// System.out.println(gunTree.getNearestNeighbor(searchPoint).getDistance(searchPoint));

		// Build Rednaxela's tree with random 8-dimensional points.
		startTime = System.nanoTime();
		for (int i = 1; i < numOfPoints; i++) {
			double[] addPoint = new double[8];
			for (int d = 0; d < 8; d++) {
				addPoint[d] = Math.random();
			}
			double[] trash = new double[1];      // dummy payload stored with each point
			trash[0] = 0.589;
			agsTree.addPoint(addPoint, trash);
		}

		double[] agsSearchPoint = new double[] { 0.4, 0.5, 0.8, 0.2, 0.4, 0.2, 0.1, 0.9 };

		System.out.println("time elapsed building = "
				+ ((System.nanoTime() - startTime) * 1E-6) + " milliseconds");

		// Time a single 1-nearest-neighbor search in Rednaxela's tree.
		startTime = System.nanoTime();
		agsTree.nearestNeighbor(agsSearchPoint, 1, false);
		System.out.println("time elapsed searching = "
				+ ((System.nanoTime() - startTime) * 1E-6) + " milliseconds");
	}
}

Thanks and God bless you, --AW 21:22, 24 April 2011 (UTC)

Well, you're not using my tree incorrectly, except that "new SqrEuclid<double[]>(8, 400000);" should probably be "new SqrEuclid<double[]>(8, null);". The second parameter is only used when a size-limited tree is desired, and that has extra processing overhead. That probably doesn't make a big difference though.

You are, however, only timing a single run of each search, and that's rather poor test methodology, particularly given how Java's JIT compiler works. The first run of any piece of code will always be slow, because the JIT compiler only optimizes methods after later runs. I don't expect my code to perform well when tested with such an unrealistically small number of searches. I'd highly suggest running the code from the "get source" link here. That framework has been well tested, and it measures both the speed and the accuracy of trees in conditions similar to normal use in Robocode. It also has code for two modes:

  1. Run "dummy" runs that "don't count" first, to let the JIT compiler finish with everything. This eliminates the JIT's warm-up delay from the test.
  2. Run each test iteration in a new JVM. This ensures every iteration is, on average, equally influenced by the JIT.

When benchmarking in Java, one really needs to be careful to consider the influence of the JIT compiler, as it can radically sway the results. Even setting that aside, you still need thousands of searches for an accurate speed measurement.
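
To illustrate the idea, a minimal warm-up-then-measure harness might look something like this (only a sketch; KDTree, DataPoint and getNearestNeighbor are just the names from the tester above, standing in for whatever tree is being timed):

// Hypothetical timing harness: the warm-up iterations exercise the same code
// path but are discarded, so the JIT compiler has already optimized the hot
// methods by the time the measured iterations start.
static double millisPerSearch(KDTree tree, DataPoint[] queries, int warmup, int measured) {
	for (int i = 0; i < warmup; i++) {
		tree.getNearestNeighbor(queries[i % queries.length]);   // discarded warm-up runs
	}
	long start = System.nanoTime();
	for (int i = 0; i < measured; i++) {
		tree.getNearestNeighbor(queries[i % queries.length]);   // only these runs are timed
	}
	return (System.nanoTime() - start) * 1E-6 / measured;       // average milliseconds per search
}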

I'd also wonder whether you've tested the accuracy of your kd-tree. It is very easy to have bugs that dramatically improve speed but occasionally produce incorrect output.
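
One simple way to check accuracy is to compare the tree's answer against a brute-force scan over the same points; a rough sketch, again reusing the KDTree/DataPoint names from the tester above:

// Hypothetical accuracy check: the tree's nearest neighbour must be exactly as
// close as the best point found by a brute-force scan over everything stored.
static boolean nearestMatchesBruteForce(KDTree tree, DataPoint[] allPoints, DataPoint query) {
	double bestDist = Double.POSITIVE_INFINITY;
	for (DataPoint p : allPoints) {
		bestDist = Math.min(bestDist, p.getDistance(query));    // linear scan over every point
	}
	double treeDist = tree.getNearestNeighbor(query).getDistance(query);
	return treeDist == bestDist;                                // ties still count as correct
}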

--Rednaxela 00:41, 25 April 2011 (UTC)

Firstly, thanks for prompting me to check the accuracy of my tree. I have now fixed that, and assuming I didn't break it again while optimizing, I think I have the fastest kD-Tree without bounding boxes, but my tree is still slower than yours:

AW time (milliseconds per point) = 0.23410419200000002

AW visits = 1018805

AGS time (milliseconds per point) = 0.15207064

AGS visits = 403029

As you can see, the problem is that yours visits fewer leaf nodes than mine, because you have the bounding box. I'll see how I do when I add that to my tree. Also, I haven't been able to get the KNN benchmark running on my computer (not that I have tried hard), which is why I am using my own benchmark here (10,000 iterations, with the first 200 discarded). --AW 17:57, 28 April 2011 (UTC)
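
For reference, the bounding-box trick discussed here amounts to keeping, per node, the smallest axis-aligned box around its points and skipping the node when nothing inside that box can beat the current best distance. A rough sketch (the field and method names are illustrative, not taken from either tree):

// Hypothetical pruning test: squared distance from the query to the nearest
// point of a node's axis-aligned bounding box (0 if the query is inside it).
static double minSqDistanceToBox(double[] query, double[] minBound, double[] maxBound) {
	double sum = 0;
	for (int d = 0; d < query.length; d++) {
		double diff = 0;
		if (query[d] < minBound[d]) {
			diff = minBound[d] - query[d];      // query is below the box in this dimension
		} else if (query[d] > maxBound[d]) {
			diff = query[d] - maxBound[d];      // query is above the box in this dimension
		}
		sum += diff * diff;
	}
	return sum;
}
// If this value exceeds the worst distance currently in the result set,
// the whole subtree under that node can be skipped.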

Alright, so I am now regularly outperforming you on random data. However, there are some things to keep in mind. I still need to add more features, which will slow it down (I am using a search that returns the single nearest point, while you are using one that returns the n nearest points with n set to one; I am unsure how big a difference that makes), and performance on random data does not necessarily represent performance in Robocode. Also, the difference is so small that it should make no difference in Robocode, though I think I could optimize a bit more. Finally, while designing this I came up with a new way to perform nearest neighbor searches that could theoretically give better performance. I may eventually see how that goes, but I feel like working directly on robots at the moment, so it will have to wait.

AW time (milliseconds per point) = 0.159079984

AGS time (milliseconds per point) = 0.16103140800000001

--AW 23:59, 12 May 2011 (UTC)

Nice stuff. I'm also unsure how big the overhead is from getting the n nearest instead of the single nearest, but it's possible it is enough to explain the margin.

In reply to your earlier comment that "yours visits fewer leaf nodes than mine, due to the fact that you have the bounding box": I do not believe the bounding box is the only reason. I also traverse the tree in an unconventional fashion, as a trick to reduce the number of visited leaf nodes.

After finishing with a branch of the tree, instead of checking its sibling, I descend a path from the "best untaken branch", based on a queue (implemented as a min-heap) of untaken branches. The "best" untaken branch is the one with the smallest distance between the search point and the closest possible point in that branch. Essentially, it's a hybrid of depth-first and best-first traversal styles: it starts as a depth-first search, but replaces "check the sibling" with "jump to the best".

The reasoning behind this is that it should help in cases where the search point is near a split in the tree, so that the search doesn't waste too much time in the side of the branch that is only superficially the nearest. In my experiments this novel approach only helped very slightly in overall performance; I expect it has a larger impact on the leaf node count than on overall time, because it has overhead of its own.
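
A rough sketch of that control flow, using java.util.PriorityQueue as the min-heap; TreeNode, distanceToNode() and the leaf/result handling are placeholders, and only the descend-then-jump-to-best ordering follows the description above:

// Hypothetical search loop: descend depth-first, but instead of backtracking to
// the sibling, always continue from the globally best untaken branch.
PriorityQueue<TreeNode> pending = new PriorityQueue<>(
		Comparator.comparingDouble((TreeNode n) -> distanceToNode(query, n)));
TreeNode node = root;
while (node != null) {
	if (node.isLeaf()) {
		visitLeaf(node, query, results);            // update the current best distances
		node = pending.poll();                      // "jump to best" untaken branch
		if (node != null
				&& distanceToNode(query, node) > results.worstDistance()) {
			node = null;                            // no remaining branch can improve the result
		}
	} else {
		pending.add(node.childFartherFrom(query));  // defer the sibling instead of visiting it
		node = node.childCloserTo(query);           // keep descending toward the query
	}
}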

--Rednaxela 00:31, 13 May 2011 (UTC)

Cool idea! I was counting the leaves checked, and the bounding box knocked mine down to about 3,300,000 if I remember correctly. However, I was using it for every branch check rather than only for the "second guess" checks. This had the side effect of making my tree seem faster than yours before it really was, because yours needed to visit more leaf nodes. Since that accounts for about 5 microseconds of difference, and my tests indicate that mine is about 3 microseconds faster than yours, I am nearly certain that yours would win with a dedicated "get nearest point" method. So I still have more optimizing to do, but I will take a break to try to write a decent gun using my tree. --AW 01:19, 13 May 2011 (UTC)

My robot is freezing, and I thought it could be the kD-Tree, so I have worked on that a bit more. The latest benchmarks:

10,000 pseudo-randomly distributed points in an 8-dimensional space, searching for the 20 nearest points; 10,000 iterations, with the first 200 discarded.

AW time (milliseconds per point) = 0.505854624

AGS time (milliseconds per point) = 0.53606438

10,000 pseudo-randomly distributed points in an 8-dimensional space, searching for the 50 nearest points; 10,000 iterations, with the first 200 discarded.

AW time (milliseconds per point) = 0.787480048

AGS time (milliseconds per point) = 0.817522996

I think this is beyond the margin of error. However, I still have lots of improvements I want to try, as well as that other search method. --AW 00:14, 18 May 2011 (UTC)


i don't see the KD-TREE

i don't see the KD-TREE

Tmservo (talk) 21:54, 13 October 2013

The tree's code is in Gilgalad (these tests weren't run with the latest version anyway).

AW (talk) 15:08, 14 October 2013
 

Hashmaps vs storing data in the tree

I was working on my kd-tree and one of the changes I made was to store the data in the tree rather than in a hashmap, thinking that this would save time. However, in my benchmarks, it is much faster to use a hashmap. Does anyone know why this would be the case?

AW 18:17, 25 July 2012

It's a very strange question, because a hashmap and a kd-tree are absolutely different structures with different aims and contracts, and of course the map is faster by its nature. Maybe you could publish source code showing how you use the map and the tree?

Jdev 18:38, 25 July 2012
 

Oh, it looks like I misunderstood you. Do you mean why a HashMap is faster than a TreeMap? I'm not sure, but I think a hash map has O(1) efficiency while a tree map is O(log N), because a tree map is sorted and based on a red-black tree. I won't describe how they work because of my English skill...

Jdev 19:39, 25 July 2012
 

What I mean is that it seems slower to store the data with the point in the kd-tree than to store only the point in the tree and then use the point as the key for a hashmap. So, for example, you would have:

PointEntry entry = new PointEntry(pointCoordinates, DataObject); tree.add(entry);

instead of

hashmap.put(pointCoordinates, DataObject); tree.add(pointCoordinates);

Maybe I have some typecasting in the tree that is slowing it down?
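
For what it's worth, a minimal sketch of the two patterns being compared; PointEntry, GuessFactorData and the tree methods are made-up names, not the actual API:

// Pattern 1 (hypothetical): the data object is stored inside the tree entry.
PointEntry entry = new PointEntry(pointCoordinates, new GuessFactorData());
tree.add(entry);
GuessFactorData fromTree = tree.nearest(query).getData();

// Pattern 2 (hypothetical): the tree stores only coordinates; a HashMap maps
// them to the data. The lookup relies on getting back the same double[]
// instance that was inserted, since arrays use identity hashCode()/equals().
Map<double[], GuessFactorData> dataByPoint = new HashMap<double[], GuessFactorData>();
dataByPoint.put(pointCoordinates, new GuessFactorData());
tree.add(pointCoordinates);
GuessFactorData fromMap = dataByPoint.get(tree.nearest(query));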

AW 22:17, 25 July 2012
 

OK, I understand your problem now, but I don't have any advice yet. :) Can you publish the kd-tree? Is the difference only in the type of data stored in the kd-tree? What type does pointCoordinates have?

Jdev 03:56, 26 July 2012
 
