Difference between revisions of "RetroGirl/Gun"

From Robowiki
{{Navbox small
| title  = Sub-pages
| parent = RetroGirl
| page1  = Gun
}}
 
== Background ==

This is the [[:Category:Perceptual Bots|perceptual]] gun used in [[RetroGirl]]. I'd been tinkering with genetic algorithms and [[User:Voidious/WaveSim|WaveSim]] a lot and was trying to think of interesting ways to apply them.
  
It's not hard to imagine there could be movement tendencies shared by most bots, so I was pretty sure I could do better than [[Head-On Targeting]], [[Linear Targeting]], or [[Circular Targeting]].  Coming up with a way to decode gun logic from a bit string is the challenging part - you have to make sure the translation between bits and gun logic allows crossover and mutation to retain traits, or the GA wouldn't be able to evolve and improve the fitness.
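To make that concrete, here's a toy sketch (in Java, with made-up names like <code>BitStringGa</code> - this is not the actual code used to evolve the gun) of the two GA operators the encoding has to survive. Single-point crossover copies contiguous runs of bits from each parent, which is what lets a trait encoded in adjacent bits pass on intact:

```java
import java.util.Random;

// Illustrative sketch of basic GA operators on a bit string.
public class BitStringGa {
    // Single-point crossover: the child takes the first `cut` bits from
    // parent a and the rest from parent b, so contiguous traits survive.
    static boolean[] crossover(boolean[] a, boolean[] b, int cut) {
        boolean[] child = new boolean[a.length];
        for (int i = 0; i < a.length; i++) {
            child[i] = (i < cut) ? a[i] : b[i];
        }
        return child;
    }

    // Point mutation: flip each bit independently with a small probability.
    static boolean[] mutate(boolean[] bits, double rate, Random rnd) {
        boolean[] out = bits.clone();
        for (int i = 0; i < out.length; i++) {
            if (rnd.nextDouble() < rate) out[i] = !out[i];
        }
        return out;
    }
}
```

The design pressure on the decoder is that a child produced this way should still decode to a valid gun whose behavior resembles its parents.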
  
I'm also interested in using it in [[Diamond]] for the first few shots, when it can outperform my KNN gun. The initial versions I tried in Diamond saw no performance increase, but I think I can do better.
  
 
== How it works ==

The current version is basically a pre-loaded, non-learning KNN ([[Dynamic Clustering]]) gun. The attributes encoded in the bit string are:
* Weights for each dimension.
* A few hundred data points in a few dimensions, each with an associated [[GuessFactor]].
  
At fire time, just look up the one nearest neighbor and shoot at the corresponding GuessFactor. For the gun in [[RetroGirl]], I stick to attributes that don't require storing state, but for [[Diamond]], I'm free to use things like acceleration, time since velocity change, etc. You can check RetroGirl's code for the huge hex string and the logic to decode it into a kd-tree. =)
  
== Performance ==
  
Here are some hit rates against a test bed of ~500 bots over 4 seasons (tested with [[WaveSim]]):
  
* Head-On Targeting: 14.93%
* Linear Targeting: 16.05%
* Circular Targeting: 16.16%
* Best perceptual KNN gun: 17.4%
* Diamond's KNN gun: 23.21%
  
I've begun working with a test bed of 1-round battles, which should be more appropriate for how I want to use it in [[Diamond]]. This is a little more interesting - hit rates against ~500 bots over ~33 seasons (~16,000 battles):
  
* Best perceptual KNN gun so far: 21.45%
* Diamond's KNN gun: 22.02%
* Perceptual KNN for first 4 shots, then Diamond's gun: 22.69%
  
I expected the gap to be closer with less time for Diamond's gun to learn, but I really can't believe how highly the perceptual gun scores here. Maybe all bots move more similarly in the first round than in the rest of the battle?
 
 
__NOTOC__

[[Category:Targeting Implementations]]

Latest revision as of 20:04, 11 July 2012
