Talk:DemonicRage
Revision as of 23:26, 11 January 2010

Recently I have built DemonicRage a new DC gun from the ground up that uses linked lists and targets everyone (ver 2.5+). It performs very well untuned, but first I am addressing some speed issues (I get skipped-turn events after 60,000 scans, but notice poor performance much sooner). I have not tested to confirm, but I believe the distance calculation is the main bottleneck. I'm going to try caching the distance calculations. I figure it should work well; the precise distance isn't even important. Do you guys do this when using kd-Trees? Perhaps within a tree? -Jlm0924
I'm not sure what you mean by caching, unless you start doing approximations, which would really defeat the whole point of not just using bins like VCS. No distance calculations are cached with Kd-Trees. The point of the Kd-Tree is to create groups of scans in a tree, so as to be able to discard whole branches of the tree at once without needing to calculate the distance to every scan. --Rednaxela 06:20, 7 January 2010 (UTC)
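For illustration, here is a minimal sketch of the branch pruning described above. This is hypothetical code (not any actual Robocode tree implementation, and all names are illustrative): the far subtree is only searched when the splitting plane lies closer than the best match found so far, which is what lets whole branches be skipped without computing a distance to every scan.

```java
import java.util.Arrays;

// Minimal kd-tree sketch illustrating branch pruning during a
// nearest-neighbor search. Illustrative only, not production code.
class KdNode {
    double[] point;
    int axis;            // splitting dimension at this node
    KdNode left, right;

    KdNode(double[] point, int axis) { this.point = point; this.axis = axis; }

    // Build by cycling the splitting axis and splitting at the median.
    static KdNode build(double[][] pts, int depth) {
        if (pts.length == 0) return null;
        int axis = depth % pts[0].length;
        Arrays.sort(pts, (a, b) -> Double.compare(a[axis], b[axis]));
        int mid = pts.length / 2;
        KdNode node = new KdNode(pts[mid], axis);
        node.left = build(Arrays.copyOfRange(pts, 0, mid), depth + 1);
        node.right = build(Arrays.copyOfRange(pts, mid + 1, pts.length), depth + 1);
        return node;
    }

    // Manhattan distance between two points.
    static double dist(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) sum += Math.abs(a[i] - b[i]);
        return sum;
    }

    // Returns the stored point nearest to query; best is the closest found so far.
    static double[] nearest(KdNode node, double[] query, double[] best) {
        if (node == null) return best;
        if (best == null || dist(query, node.point) < dist(query, best)) best = node.point;
        double diff = query[node.axis] - node.point[node.axis];
        KdNode near = diff < 0 ? node.left : node.right;
        KdNode far  = diff < 0 ? node.right : node.left;
        best = nearest(near, query, best);
        // Prune: descend the far branch only if the splitting plane is closer
        // than the best distance found so far -- otherwise nothing on that
        // side can beat the current best, and the whole subtree is skipped.
        if (Math.abs(diff) < dist(query, best)) best = nearest(far, query, best);
        return best;
    }
}
```

The same pruning test is valid for Manhattan distance, since the distance to the splitting plane along one axis is a lower bound on the Manhattan distance to anything beyond it.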
Each time you aim, you are using a new/current scan, so I'm not sure what kind of caching you can do with that. I would definitely suggest trying out Red's kd-tree, as that should help a lot and probably isn't even that much work. Suboptimal play-it-forward simulation (if you do that) comes to mind, too, or just using too many scans when aiming. For reference: I fire extra gun waves from the locations of every other bot on the field, so I have like 5x as many total scans as you normally would, and I start to hit skipped turns near the end of a 35-round battle (with my own, slower kd-tree). --Voidious 14:51, 7 January 2010 (UTC)
My initial thought with caching was: I only need to identify the best 300 scans or so.

My initial code idea is below. Yeah, I'm sure Kd Trees are the best way to go... I could still try something like this to at least filter which scans need a full calculation. (Maybe see how it pans out.) -Jlm0924

	public static class fastDistance {
		public static final int DIVISIONS = 80000;
		// 1.0, not 1: integer division would make K equal 0
		public static final double K = 1.0 / DIVISIONS;
		public static double[][] manhattanDistanceTable;

		public static final void init() {
			if (manhattanDistanceTable == null) {
				manhattanDistanceTable = new double[DIVISIONS][DIVISIONS];
				for (int i = 0; i < DIVISIONS; i++) {
					// was "i < DIVISIONS; i++", which never advanced j
					for (int j = 0; j < DIVISIONS; j++) {
						manhattanDistanceTable[i][j] = Math.abs(i * K - j * K);
					}
				}
			}
		}

		public static double manhattanDistance(Dist[] d) {
			double sum = 0;
			for (int x = 0; x < d.length; x++) {
				// values are assumed normalized to [0, 1); truncate to a table index
				sum += manhattanDistanceTable[(int) (d[x].value1 * DIVISIONS)]
				                             [(int) (d[x].value2 * DIVISIONS)];
			}
			return sum;
		}
	}

	public static class Dist {
		double value1;
		double value2;
		double weight;
	}
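Before committing to this, the table idea can be micro-benchmarked against direct subtraction. The sketch below uses a hypothetical, much smaller DIVISIONS (4096, about 128 MB of table) so the table actually fits in memory; the class and method names are illustrative only.

```java
import java.util.Random;

// Hypothetical micro-benchmark: direct Math.abs vs. a precomputed lookup
// table. DIVISIONS is deliberately far smaller than 80000 so the table
// (4096 x 4096 doubles, about 128 MB) can be allocated at all.
public class DistBench {
    static final int DIVISIONS = 4096;
    static final double[][] table = new double[DIVISIONS][DIVISIONS];

    static void fillTable() {
        for (int i = 0; i < DIVISIONS; i++)
            for (int j = 0; j < DIVISIONS; j++)
                table[i][j] = Math.abs(i - j) / (double) DIVISIONS;
    }

    // Plain per-dimension difference, summed.
    static double directSum(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) sum += Math.abs(a[i] - b[i]);
        return sum;
    }

    // Same sum via table lookup; inputs assumed normalized to [0, 1).
    static double tableSum(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++)
            sum += table[(int) (a[i] * DIVISIONS)][(int) (b[i] * DIVISIONS)];
        return sum;
    }

    public static void main(String[] args) {
        fillTable();
        int n = 1_000_000;
        double[] a = new double[n], b = new double[n];
        Random rnd = new Random(42);
        for (int i = 0; i < n; i++) { a[i] = rnd.nextDouble(); b[i] = rnd.nextDouble(); }

        long t0 = System.nanoTime();
        double direct = directSum(a, b);
        long t1 = System.nanoTime();
        double lookup = tableSum(a, b);
        long t2 = System.nanoTime();

        System.out.printf("direct: %d us (sum %f), lookup: %d us (sum %f)%n",
                (t1 - t0) / 1000, direct, (t2 - t1) / 1000, lookup);
    }
}
```

Note that the lookup version still pays for two multiplications, two casts, and a two-level array dereference (a likely cache miss) per dimension, which is why the plain subtraction is hard to beat.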

Ahh, so precalculating some distances with a table, kind of like FastTrig. Well, I doubt it'll help much really, particularly for manhattan distance as you have there. Taking differences and summing them is such a quick operation that I think that unless the number of dimensions is huge, the function call overhead of this would make its benefit minimal if not non-existent. It wouldn't hurt to benchmark it though; maybe I'm being too pessimistic. Since you already have the code here, might as well try it. One note is that at least with the number of divisions you specify there, it would take a huge amount of memory really... 80000*80000*4bytesperdouble = 23.84GB --Rednaxela 19:30, 11 January 2010 (UTC)

Not to be a smartass here, but actually a double is 8 bytes. :P So that would actually be 47.68 GB.
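For the record, the corrected arithmetic at 8 bytes per double can be checked with a quick sketch (the class name is just illustrative):

```java
// Table size for an 80000 x 80000 array of doubles, at 8 bytes each.
public class TableMemory {
    public static void main(String[] args) {
        long divisions = 80_000L;
        long bytes = divisions * divisions * 8;          // 8 bytes per double
        double gib = bytes / (1024.0 * 1024.0 * 1024.0); // binary gigabytes
        System.out.printf("%,d bytes = %.2f GiB%n", bytes, gib);
    }
}
```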