Talk:Gilgalad/movementStrategy


So, I've always been a big fan of Tomcat's movement strategy where, if I understand correctly, you use a bunch of classification schemes and then score them based on their performance. I tried this a long time ago with Gilgalad, but it didn't work well. I'm giving it another shot now, and if nothing else, it should hopefully give me a better idea of which classifiers to use and when.


Some data against Raiko at the end of 35 rounds:

Classifier scores are based on the probability the classifier predicted for a known bullet landing within a certain range of the actual angle (for now ±0.2 radians). After each known enemy bullet this is updated, and the printed score is classifierNScore / classifier0Score.
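
As a rough sketch of that scoring loop (the interface and names here are hypothetical, not Gilgalad's actual API):

import java.util.List;

// Hypothetical sketch of the scoring described above; the classifier interface and
// names are illustrative only.
class ClassifierScoring {
    interface Classifier {
        // probability mass the classifier's prediction puts inside [low, high] radians
        double predictedProbability(double low, double high);
    }

    private final List<Classifier> classifiers;
    private final double[] scores;

    ClassifierScoring(List<Classifier> classifiers) {
        this.classifiers = classifiers;
        this.scores = new double[classifiers.size()];
    }

    // called once per known enemy bullet, with the angle it was actually fired at
    void onKnownEnemyBullet(double actualAngle) {
        for (int i = 0; i < classifiers.size(); i++) {
            scores[i] += classifiers.get(i).predictedProbability(actualAngle - 0.2,
                    actualAngle + 0.2);
        }
    }

    // printed value is classifierNScore / classifier0Score
    void printScores() {
        for (int i = 0; i < scores.length; i++) {
            System.out.println("Classifier " + i + " scores :" + scores[i] / scores[0]);
        }
    }
}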

Battle 1:


Classifier 0 scores :1.0
Classifier 1 scores :0.9755870469605517
Classifier 2 scores :1.2369225462233957
Classifier 3 scores :1.7528612982062486
Classifier 4 scores :1.5329624603178138
Classifier 5 scores :1.234924887594898
Classifier 6 scores :1.4174531972595874
Classifier 7 scores :1.3371945667211411
Classifier 8 scores :1.4380071717393625
Classifier 9 scores :1.893821500835094
Classifier 10 scores :1.9142428440452988
Classifier 11 scores :1.813500313301308

Battle 2:
Classifier 0 scores :1.0
Classifier 1 scores :1.1423071895729844
Classifier 2 scores :1.3729425281975491
Classifier 3 scores :1.7008483463501145
Classifier 4 scores :1.182167218778795
Classifier 5 scores :1.4489710167607848
Classifier 6 scores :1.458752434946134
Classifier 7 scores :1.4119814269034254
Classifier 8 scores :1.5747806346700741
Classifier 9 scores :1.7756977755746486
Classifier 10 scores :1.8393932242812359
Classifier 11 scores :1.6320145203001994

Battle 3:
Classifier 0 scores :1.0
Classifier 1 scores :0.9247600453962282
Classifier 2 scores :1.1469307013639247
Classifier 3 scores :1.5011101838893393
Classifier 4 scores :1.6256472273752074
Classifier 5 scores :1.5499125684940858
Classifier 6 scores :1.425592000093903
Classifier 7 scores :1.5781481081082624
Classifier 8 scores :1.6686308538501025
Classifier 9 scores :1.7011388408736738
Classifier 10 scores :1.7764882022837205
Classifier 11 scores :1.6438392150877148


The classifiers are set up as follows:

		ClassificationWeightingScheme CWS = new AntiSimpleTargeterWeightingScheme();
		MovementClassifier MC = new KNNMovementClassifier(CWS, 36, 0, 5, 3, 1.0, 
				Double.MIN_VALUE, 0.0, 0.15);
		Classifiers.add(MC); // presumably registered like the classifiers below (the scores above list 12 classifiers)

		CWS = new AntiSimpleTargeter2WeightingScheme();
		MC = new KNNMovementClassifier(CWS, 36, 0, 5, 3, 1.0, Double.MIN_VALUE, 0.0, 0.15);
		Classifiers.add(MC);

		CWS = new AntiSimpleTargeter3WeightingScheme();
		MC = new KNNMovementClassifier(CWS, 36, 0, 5, 3, 1.0, Double.MIN_VALUE, 0.0, 0.15);
		Classifiers.add(MC);
		
		CWS = new AntiStandardTargeterWeightingScheme();
		MC = new KNNMovementClassifier(CWS, 36, 0, 5, 10, 50.0, Double.MIN_VALUE, 0.0, 0.08);
		Classifiers.add(MC);

		CWS = new AntiSemiAdvancedTargeterWeightingScheme();
		
		MC = new KNNMovementClassifier(CWS, 36, 1, 5, 5, 50.0, Double.MIN_VALUE, 4.0, 0.08);
		Classifiers.add(MC);
		MC = new KNNMovementClassifier(CWS, 36, 3, 5, 5, 50.0, Double.MIN_VALUE, 4.0, 0.08);
		Classifiers.add(MC);
		MC = new KNNMovementClassifier(CWS, 36, 5, 5, 5, 50.0, Double.MIN_VALUE, 4.0, 0.08);
		Classifiers.add(MC);
		MC = new KNNMovementClassifier(CWS, 36, 10, 5, 5, 50.0, Double.MIN_VALUE, 4.0, 0.08);
		Classifiers.add(MC);
		MC = new KNNMovementClassifier(CWS, 36, 20, 5, 5, 50.0, Double.MIN_VALUE, 4.0, 0.08);
		Classifiers.add(MC);
		MC = new KNNMovementClassifier(CWS, 55, 50, 5, 5, 50.0, Double.MIN_VALUE, 4.0, 0.08);
		Classifiers.add(MC);
		MC = new KNNMovementClassifier(CWS, 105, 100, 5, 5, 50.0, Double.MIN_VALUE, 4.0, 0.08);
		Classifiers.add(MC);
		MC = new KNNMovementClassifier(CWS, 36, 0, 5, 10, 50.0, Double.MIN_VALUE, 4.0, 0.08);
		Classifiers.add(MC);



public class AntiSemiAdvancedTargeterWeightingScheme extends
		ClassificationWeightingScheme {
	
	public AntiSemiAdvancedTargeterWeightingScheme() {
		_weights = new double[] { 4.0, 4.0, 1.5, 2.0, 3.0, 3.0, 3.0, 3.0, 2.0};
//		_weights = new double[] { 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0};
	}

	@Override
	public double[] getPointCoordinates(DataWave wave) {
		MovementDataWave movementWave = (MovementDataWave) wave;

		double[] dataPointCoordinates = new double[_weights.length];
		dataPointCoordinates[0] = _weights[0] * movementWave.getBulletTravelTime()
				* 0.009166667;
		dataPointCoordinates[1] = _weights[1]
				* Math.min(Math.PI / 2, wave.getAheadWallDist()) / (Math.PI / 2);
		dataPointCoordinates[2] = _weights[2]
				* Math.min(Math.PI / 2, wave.getReverseWallDist()) / (Math.PI / 2);
		dataPointCoordinates[3] = _weights[3]
				* (wave.getVChange() + Rules.DECELERATION)
				/ (Rules.ACCELERATION + Rules.DECELERATION);
		dataPointCoordinates[4] = _weights[4] 
				* Math.sin(movementWave.getRelHeading());
		
		dataPointCoordinates[5] = _weights[5]
				* (Math.cos(movementWave.getRelHeading()) + 1) / 2;
		dataPointCoordinates[6] = _weights[6]
				* Math.abs(wave.getTargetVelocity()) / 8.0;
		dataPointCoordinates[7] = _weights[7] * Math.min(1.0, movementWave.getTicksSinceVelocityChange() / movementWave.getBulletTravelTime());
		dataPointCoordinates[8] = _weights[8] * movementWave.getRobotDistLast10Ticks() * 0.0125;


		return dataPointCoordinates;
	}
}

public class AntiStandardTargeterWeightingScheme extends
		ClassificationWeightingScheme {
	
	public AntiStandardTargeterWeightingScheme() {
		_weights = new double[] { 4.0, 4.0, 1.0, 2.0, 3.0, 3.0, 3.0, 3.0, 2.0};
//		_weights = new double[] { 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0};
	}

	@Override
	public double[] getPointCoordinates(DataWave wave) {
		MovementDataWave movementWave = (MovementDataWave) wave;

		double[] dataPointCoordinates = new double[_weights.length];
		dataPointCoordinates[0] = _weights[0] * movementWave.getBulletTravelTime()
				* 0.009166667;
		dataPointCoordinates[1] = _weights[1]
				* Math.min(Math.PI / 2, wave.getAheadWallDist()) / (Math.PI / 2);
		dataPointCoordinates[2] = _weights[2]
				* Math.min(Math.PI / 2, wave.getReverseWallDist()) / (Math.PI / 2);
		dataPointCoordinates[3] = _weights[3]
				* (wave.getVChange() + Rules.DECELERATION)
				/ (Rules.ACCELERATION + Rules.DECELERATION);
		dataPointCoordinates[4] = _weights[4] 
				* Math.sin(movementWave.getRelHeading());
		
		dataPointCoordinates[5] = _weights[5]
				* (Math.cos(movementWave.getRelHeading()) + 1) / 2;
		dataPointCoordinates[6] = _weights[6]
				* Math.abs(wave.getTargetVelocity()) / 8.0;
		dataPointCoordinates[7] = _weights[7] * Math.min(1.0, movementWave.getTicksSinceVelocityChange() / movementWave.getBulletTravelTime());
		dataPointCoordinates[8] = _weights[8] * movementWave.getRobotDistLast10Ticks() * 0.0125;

		return dataPointCoordinates;
	}
}


public class AntiSimpleTargeterWeightingScheme extends
		ClassificationWeightingScheme {
	
	public AntiSimpleTargeterWeightingScheme() {
		_weights = new double[] { 1.0, 1.0 };
	}

	@Override
	public double[] getPointCoordinates(DataWave wave) {
		MovementDataWave movementWave = (MovementDataWave) wave;

		double[] dataPointCoordinates = new double[_weights.length];
		dataPointCoordinates[0] = _weights[0] * movementWave.getBulletTravelTime()
				* 0.009166667;
		dataPointCoordinates[1] = _weights[1] * Math.abs(movementWave.getTargetVelocity() * Math.sin(movementWave.getRelHeading()))
				/ 8.0;

		
		return dataPointCoordinates;
	}
}

public class AntiSimpleTargeter2WeightingScheme extends
		ClassificationWeightingScheme {
	
	public AntiSimpleTargeter2WeightingScheme() {
		_weights = new double[] { 1.0, 1.0, 1.0};
	}

	@Override
	public double[] getPointCoordinates(DataWave wave) {
		MovementDataWave movementWave = (MovementDataWave) wave;

		double[] dataPointCoordinates = new double[_weights.length];
		dataPointCoordinates[0] = _weights[0] * movementWave.getBulletTravelTime()
				* 0.009166667;
		dataPointCoordinates[1] = _weights[1] * Math.abs(movementWave.getTargetVelocity())
				/ 8.0;
		dataPointCoordinates[2] = _weights[2] * (Math.cos(movementWave.getRelHeading()) + 1) / 2;

		
		return dataPointCoordinates;
	}
}

public class AntiSimpleTargeter3WeightingScheme extends
		ClassificationWeightingScheme {
	
	public AntiSimpleTargeter3WeightingScheme() {
		_weights = new double[] { 1.0, 1.0, 1.0, 1.0, 1.0};
	}

	@Override
	public double[] getPointCoordinates(DataWave wave) {
		MovementDataWave movementWave = (MovementDataWave) wave;

		double[] dataPointCoordinates = new double[_weights.length];
		dataPointCoordinates[0] = _weights[0] * movementWave.getBulletTravelTime()
				* 0.009166667;
		dataPointCoordinates[1] = _weights[1]
				* Math.sin(movementWave.getRelHeading());
		dataPointCoordinates[2] = _weights[2]
				* (Math.cos(movementWave.getRelHeading()) + 1) / 2;
		dataPointCoordinates[3] = _weights[3] * Math.abs(movementWave.getTargetVelocity())
				/ 8.0;
		dataPointCoordinates[4] = _weights[4] * (wave.getVChange() + Rules.DECELERATION)
				/ (Rules.ACCELERATION + Rules.DECELERATION);

		
		return dataPointCoordinates;
	}
}


AW 16:16, 31 March 2013 (UTC)


GoTo surfing.

So, I have GoTo surfing of two waves working, but it is skipping a lot of turns. I haven't done any detailed tests, but the battle speed seems fairly fast for a bot in the top 10, so I am assuming the turns are only being skipped because all of the calculations are done in one turn. Any ideas on how to improve performance? (A breakdown of the time spent on various things each turn is now printed at the end of each round.)

AW04:30, 8 March 2012

I did several things to get my goto working at decent speed:

  1. Cache the danger value at each point on the second wave. That way, if you are able to reach the exact point before the wave hits, you don't need to look up the danger again.
  2. Before calculating your second wave dangers, get all your first wave dangers. Sort these by danger, lowest to highest. That way, you can immediately break the moment your first wave danger exceeds your minimum first+second wave danger, because from then on it is impossible to get less than that. (See the sketch after this list.)
  3. Optimize your precise prediction as much as you can, then optimize it some more. Use FastTrig. Avoid divides. Cache values for cos/sin if your heading won't be changing any more. Compare squared distances instead of computing square roots.
  4. Try to make your danger calculation as fast as possible. It gets called a LOT. If possible, stick everything in a 1D array so you just check the danger at that GF (or across a range) as you need it.

There's probably more, but I think these were the big ones.
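
To make points 1 and 2 concrete, here is a minimal sketch assuming hypothetical Candidate and SecondWavePoint classes (illustrative only, not DrussGT's actual code):

import java.util.Comparator;
import java.util.List;

// Sketch only: class and field names are hypothetical.
class SecondWavePoint {
    double cachedDanger;                 // point 1: danger computed once and cached
}

class Candidate {
    double firstWaveDanger;              // danger of this first-wave option
    List<SecondWavePoint> reachable;     // second-wave points reachable from it
}

class SurfPruningSketch {
    static double bestTotalDanger(List<Candidate> candidates) {
        // point 2: sort first-wave options by their own danger, lowest first
        candidates.sort(Comparator.comparingDouble(c -> c.firstWaveDanger));

        double best = Double.POSITIVE_INFINITY;
        for (Candidate c : candidates) {
            // once the first-wave danger alone exceeds the best first+second total,
            // nothing later in the sorted list can beat it, so stop
            if (c.firstWaveDanger >= best) break;
            for (SecondWavePoint p : c.reachable) {
                best = Math.min(best, c.firstWaveDanger + p.cachedDanger);
            }
        }
        return best;
    }
}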

Skilgannon12:59, 8 March 2012

1 and 2 look a lot like the transposition tables and pruning techniques used with success in chess engines. I was trying to find something similar in Robocode for months, but failed.

You can cache the square roots in 3 and only need to recalculate them when the wave log is updated.

MN15:19, 8 March 2012
 

I will try more optimizations later today, but one big optimization I do use is only calculating changes in the movement path for the final calculation. So I generate an estimated path, and then I use the points along that path for my real paths until I need to move to a different point (which is at most four indices before the destination point). Skipped turns only seem to be a problem in the first round now (fewer than 5 turns skipped in the other rounds). Here is how time is spent in the first round against Raiko:

Wave Danger Time = 24.566979999999997 (time spent getting points from the KdTrees for the waves)
OnScannedRobot Time = 197.411769 (time spent in onScannedRobot for the movement)
Wave Management Time = 39.465934 (time managing movement waves; this includes precise intersection for virtual waves every turn)
Move Calc time = 664.440483 (time spent calculating my movement path; includes the next four items)
Path generation time = 230.93243999999999 (precise predictor path generation)
First wave angles time = 266.566414 (precise intersection on the first wave)
Check Danger time = 111.454803 (danger for the first and second wave; includes the time from the next item)
Second wave danger time = 108.967914 (precise bot width and danger for the second wave)

Time spent on movement = 906.613341
Time spent on gun = 239.867495

One more thing: I use DC surfing, so I don't actually spend that much time calculating the danger (less than 3 milliseconds for the first waves over the whole first round). And finally, I do limit myself to the 5 best movement options from the first wave.

AW16:59, 8 March 2012
 

So, I have tried all of the ideas above, but I am still skipping turns. Obviously I can still work on 3 and 4, but I had a question about the second wave points optimization: is a hashmap fast enough, or should I try to make a custom class to handle that? Also, I ran Gilgalad 1.8 on Windows (yes, I admit it, I use Windows sometimes) to show some other people and noticed that it didn't skip any turns!?! Further tests show that it will occasionally skip turns, but not nearly as much as it does on Linux (Fedora 16). This was all on the same computer, but with slightly different versions of Robocode (both 1.7.X, however). Any ideas as to why it skips more turns on Linux?

AW00:50, 13 March 2012

My number one guess is... perhaps you are using a 64-bit OS in one case and 32-bit in the other? That can have a significant impact on performance with pointers (which the JVM relies on much more heavily than typical compiled code).

Rednaxela01:49, 13 March 2012

Yes, the Linux installation is 64-bit but the Windows one is 32-bit. Thanks!

AW18:50, 13 March 2012
 

Maybe you have different CPU constants?

I spent about a month optimizing Tomcat and can give only one piece of advice: accurate instrumentation of time measurements. Not only averages, but also min and max times. Only after I did this did I find Tomcat's hotspots.
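
A small sketch of that kind of instrumentation (the class is illustrative, not Tomcat's actual code):

// Tracks total, min and max time for one code section using System.nanoTime().
class SectionTimer {
    private long total, count, max;
    private long min = Long.MAX_VALUE;
    private long start;

    void begin() { start = System.nanoTime(); }

    void end() {
        long elapsed = System.nanoTime() - start;
        total += elapsed;
        count++;
        min = Math.min(min, elapsed);
        max = Math.max(max, elapsed);
    }

    @Override
    public String toString() {
        return String.format("avg %.3f ms, min %.3f ms, max %.3f ms",
                total / 1e6 / Math.max(1, count), min / 1e6, max / 1e6);
    }
}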

Jdev06:31, 13 March 2012
 

Didn't try it on Linux yet (yep, another Windows user). But on Windows 7, CPU affinity is having an impact on skipped turns. Which means different time-sharing algorithms are leading to different amounts of skipped turns.

MN16:34, 13 March 2012
 

Point 2 - I just use an ArrayList for first wave points and another for second wave points. Put the first through Collections.sort(), then iterate through it. At each point, calculate the min reachable second wave danger; however, if the first wave danger is more than the min first+second, then break. I'm not sure why a hashmap would even help here?

Have you checked that your JVM is the same version for both? And your CPU constant? 64-bit might be better at raw throughput for the benchmark, but about the same for stuff with lots of decisions in it, like a bot.

Skilgannon20:13, 13 March 2012

Actually, 64-bit would probably be worse in some things too. Specifically, 64-bit mode means pointers take twice as much memory, effectively doubling memory bandwidth requirements for code that spends most of its CPU time dealing with pointers between objects. For Robocode bots, deep recursive data structures are not too uncommon, and Java is more subject to this problem than compiled object-oriented languages, because Java doesn't allow non-reference composition of classes.

Rednaxela22:41, 13 March 2012
 

I meant: would there be a faster way to cache the dangers of points on the second wave? For the first wave paths, I use an array and a quicksort that takes an array of dangers and returns an array giving the order of the dangers, which was the fastest way I could think of to do this. I don't know much about hashmaps, so I was wondering if there would be a faster way to store about 400 points and then check if any of them were identical to another point. On thinking it over, the points would probably be added in increasing or decreasing order of X and Y, so I may be able to keep an ArrayList sorted without much overhead.
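
For illustration, an index sort over a danger array could be sketched like this (a boxed-Integer version for brevity; a hand-rolled quicksort over primitives, as described above, avoids the boxing):

import java.util.Arrays;
import java.util.Comparator;

class DangerOrderSketch {
    // Returns the indices of the danger array sorted from lowest to highest danger.
    static int[] sortedOrder(double[] dangers) {
        Integer[] order = new Integer[dangers.length];
        for (int i = 0; i < order.length; i++) order[i] = i;
        Arrays.sort(order, Comparator.comparingDouble(i -> dangers[i]));
        int[] result = new int[order.length];
        for (int i = 0; i < order.length; i++) result[i] = order[i];
        return result;   // result[0] is the index of the lowest-danger option
    }
}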

AW16:37, 14 March 2012
 

Hmm. What I do is pre-calculate a whole bunch of points by precise prediction until a few ticks after the second wave has hit. Then I use these points as 'destinations' for my goto algorithm. Each point holds its own danger value, and this danger is the one used if the point is 'reachable'; otherwise I calculate a new danger for whatever location the wave hits me at. That way I never run into problems with duplicate points, association, etc.
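
A bare-bones sketch of such a destination point (field names are hypothetical, not DrussGT's actual class):

// Sketch only: a pre-calculated goto destination that carries its own cached danger.
class Destination {
    double x, y;           // predicted location along the candidate path
    long arrivalTime;      // tick at which the bot would arrive there
    double cachedDanger;   // danger evaluated once, when the point was generated

    // If this is true the cached danger is used directly; otherwise a new danger is
    // calculated for wherever the wave actually intercepts the bot.
    boolean reachableBefore(long waveHitTime) {
        return arrivalTime <= waveHitTime;
    }
}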

Skilgannon17:27, 14 March 2012
 
 

What are the units of your time measurements?

Skotty16:10, 13 March 2012

The time measurements are in milliseconds, but I used System.nanoTime(), so they should be fairly accurate. I have a few more ideas for optimizations that would make a large improvement, but hopefully I can get back to adding new features rather than making old ones run faster. Since I have the week off, maybe, just maybe, I can take the throne before I turn 18...

AW18:54, 13 March 2012
 

Well, I have spent untold hours optimizing and am still skipping turns on my 64-bit JVM, but now I have two ideas. In the first round, Gilgalad is about 2 times slower than it is in the last round. Since it is processing more values in the last round, the difference should be due to the optimizer. So my first idea was to try ProGuard: I used it to optimize Gilgalad before the start of a battle. There may have been some small improvement, but it isn't really noticeable. I don't know how the Java optimizer works, but is it possible that the CPU time spent on the optimizer is counted as time spent on the robot? This is the only explanation I can think of for why the speed is so much better later in the round but optimizing at compile time doesn't help. (Well, other than the "AW is making another of his stupid mistakes" explanation.) My second idea is to warm up the JVM with some pointless calculations for the first 30 turns of a battle, where I don't need to do anything CPU intensive; I could surf some fake wave and not actually move on the fake path, or something like that. However, I don't like the idea of making Gilgalad waste time doing nothing just to make it appear faster.

AW02:22, 11 April 2012

The JIT compiler (optimizer) counts how many times each piece of code is executed and compiles it after a threshold is reached (1500 executions in client mode, 10000 in server mode). Compilation is done in a background thread, and that piece of code stays in interpreted mode until the background compilation finishes. So the optimization runs in the background and should not interfere unless you limit the JVM to a single core. In client mode, optimization finishes faster and the JVM consumes less memory, but the resulting compiled code is slower. Server mode is the opposite.

You can change the threshold with the -XX:CompileThreshold=10000 option, make the optimizer run in the foreground with the -Xbatch option, choose client mode with the -client option, and server mode with the -server option. Experimenting with different settings may help you understand what is going on.

Maybe it is skipping turns while the code is still running in interpreted mode (at the beginning of the first round), once the bot history has grown a bit. After the code switches to compiled mode, it becomes faster and stops skipping turns.

Warming up the JVM with pointless calculations probably won't work, since the JIT compiler will compile only that piece of pointless code. And sometimes warming up confuses the optimizer and it activates the wrong set of optimizations, resulting in slower compiled code.

MN03:27, 12 April 2012
 

It might have something to do with your Java settings - maybe try using the client VM instead of the server one (which is the default on Linux, IIRC).

Otherwise, try giving DrussGT a run and see how many turns it skips. It usually skips around 2 per match on my laptop.

Skilgannon12:25, 11 April 2012
 