User:Voidious/TripHammer
== Background ==
 
This is a new gun I'm working on that uses [[wikipedia:k-means clustering|k-means clustering]]. I knew [[User:ABC|ABC]] had tried and abandoned it once upon a time, so I found myself wondering what kind of concessions you would have to make to get k-means clustering working within Robocode's CPU constraints. At first glance, k-means clustering indeed seems a lot more CPU intensive than [[wikipedia:k-nearest neighbor algorithm|k-nearest neighbors]]. But as I considered how I might implement it, I realized that it was not only viable, it could even be pretty fast.
 
== How it works ==
 
  
 
=== The algorithm ===
 
* The first k points added are used as the initial locations of the means. (Note that these points are ''not'' permanently tied to the clusters beyond just being the first points in them.)
* When a new point comes in, assign it to the nearest cluster and update the mean.
* After the new point is added, go through a few old points and reclassify them. If a point should be in a different cluster, move it over and update the means of both clusters. (There's a rough sketch of this bookkeeping just after this list.)
* I keep a circular linked list of all points to keep track of which point should next be reclassified. New points are added at the "end" (right before the current list pointer), so it's always reclassifying the point that was least recently examined.
* To aim, just find the nearest cluster, then do some kernel density on the points in that cluster to decide on a firing angle.
** If there's not much data in the nearest cluster, add in points from one or more nearby clusters until we have "enough". This should make it behave similarly to k-nearest neighbors in the early rounds and in rare circumstances with very few points. (Simply using KNN in these situations is another option I'm considering.)
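Here's a rough Java sketch of that bookkeeping. The class and field names are invented for illustration (this is ''not'' TripHammer's actual code), and a plain queue stands in for the circular linked list, but it shows the core idea: assign each new point to the nearest cluster, update means with a single weighted average calculation, and reclassify a few of the least recently examined points each tick.

<pre>
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Rough sketch only -- the class and field names are invented for illustration,
// not taken from TripHammer's actual source.
public class IncrementalKMeans {
    private static final int RECLASSIFY_PER_TICK = 20;

    private final Cluster[] clusters;                       // e.g., 250 clusters
    private final Deque<DataPoint> queue = new ArrayDeque<DataPoint>();  // stands in for the circular list
    private int numPoints = 0;

    public IncrementalKMeans(int k) {
        clusters = new Cluster[k];
    }

    // Add a new point, then re-examine a few of the least recently examined points.
    public void addPoint(DataPoint p) {
        if (numPoints < clusters.length) {
            clusters[numPoints] = new Cluster(p);           // first k points seed the means
        } else {
            nearestCluster(p.attributes).add(p);
        }
        numPoints++;
        queue.addLast(p);                                   // new points go at the "end"
        for (int i = 0; i < RECLASSIFY_PER_TICK && numPoints >= clusters.length; i++) {
            DataPoint old = queue.pollFirst();
            Cluster best = nearestCluster(old.attributes);
            if (best != old.cluster) {                      // move it, updating both means
                old.cluster.remove(old);
                best.add(old);
            }
            queue.addLast(old);
        }
    }

    // Brute force nearest-cluster search, Manhattan distance over the scan attributes.
    public Cluster nearestCluster(double[] attributes) {
        Cluster best = null;
        double bestDistance = Double.POSITIVE_INFINITY;
        for (Cluster c : clusters) {
            if (c == null) continue;
            double d = 0;
            for (int i = 0; i < attributes.length; i++) {
                d += Math.abs(attributes[i] - c.mean[i]);
            }
            if (d < bestDistance) { bestDistance = d; best = c; }
        }
        return best;
    }
}

class Cluster {
    final List<DataPoint> points = new ArrayList<DataPoint>();
    final double[] mean;

    Cluster(DataPoint seed) {
        mean = new double[seed.attributes.length];
        add(seed);
    }

    // Adding a point updates the mean with one weighted average calculation.
    void add(DataPoint p) {
        int n = points.size();
        for (int i = 0; i < mean.length; i++) {
            mean[i] = (mean[i] * n + p.attributes[i]) / (n + 1);
        }
        points.add(p);
        p.cluster = this;
    }

    // Removing a point reverses that weighted average.
    void remove(DataPoint p) {
        points.remove(p);
        int n = points.size();
        if (n == 0) return;
        for (int i = 0; i < mean.length; i++) {
            mean[i] = (mean[i] * (n + 1) - p.attributes[i]) / n;
        }
    }
}

class DataPoint {
    final double[] attributes;   // normalized scan attributes (distance, velocity, etc.)
    final double guessFactor;    // precise GuessFactor recorded when the wave broke
    Cluster cluster;

    DataPoint(double[] attributes, double guessFactor) {
        this.attributes = attributes;
        this.guessFactor = guessFactor;
    }
}
</pre>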
 
=== Technical notes ===
* Regarding the initial means: I first tried k random points (common in k-means algorithms), and I've since tried slightly randomizing the first k points that are added, but just using the first k points has worked best.
* When adding/removing points to/from clusters, updating the cluster mean is easy with one weighted average calculation, so that's really fast.
* Right now, finding the nearest cluster is brute force, so this is one of the slower parts of the code. Not that it's all that slow to do brute force KNN on such a small data set, but it also runs many times per tick. There are methods to speed this up in k-means algorithms, such as the filtering method detailed [http://www.cs.umd.edu/~mount/Papers/pami02.pdf here], but I don't think that method would work in my k-means algorithm, and I haven't come up with any of my own methods yet.
* Kernel density is the other slow part of the code, since these clusters can have thousands of points. Some things I do to speed it up (there's a sketch of the density calculation just after this list):
** I test fixed, evenly spaced angles (like [[Ali]]) instead of every angle in the cluster (like [[Diamond]]).
** I use a Quartic kernel instead of a Gaussian (see [http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/AV0405/MISHRA/kde.html here]), so I'm not doing costly Math.exp operations all the time.
** I may play with reducing the number of points used in the kernel density (e.g., with nearest neighbors, or using randomly chosen points). But I like leveraging the huge amounts of well-classified data, and so far it's still fast enough for 35 rounds.
* Current settings:
** 250 clusters.
** Reclassifying 20 points each tick.
** The scan distancing function is almost exactly the same as [[Diamond]]'s main gun's, except I'm using [[wikipedia:Taxicab geometry|Manhattan distance]].
** I'm using precise [[GuessFactor|GuessFactors]] (as in [[Maximum Escape Angle|precise MEA]], like [[Dookious]]), not [[Displacement Targeting|displacement vectors]] (like [[Diamond]]).
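Here's a similarly rough Java sketch of the kernel density step, reusing the DataPoint class from the sketch above: score a fixed set of evenly spaced GuessFactors against the GuessFactors in the chosen cluster with a Quartic kernel, (1 - u^2)^2 for |u| < 1, instead of anything involving Math.exp. The slice count and bandwidth below are placeholder values, not TripHammer's actual settings.

<pre>
import java.util.List;

// Sketch of the aiming step -- ANGLE_SLICES and BANDWIDTH are placeholder values.
public class KernelDensityAim {
    private static final int ANGLE_SLICES = 59;     // fixed, evenly spaced candidate angles
    private static final double BANDWIDTH = 0.1;    // kernel width, in GuessFactor units

    // Returns the candidate GuessFactor in [-1, 1] with the highest Quartic kernel density.
    public static double bestGuessFactor(List<DataPoint> clusterPoints) {
        double bestGf = 0;
        double bestDensity = Double.NEGATIVE_INFINITY;
        for (int slice = 0; slice < ANGLE_SLICES; slice++) {
            double gf = -1.0 + (2.0 * slice) / (ANGLE_SLICES - 1);
            double density = 0;
            for (DataPoint p : clusterPoints) {
                double u = (gf - p.guessFactor) / BANDWIDTH;
                if (u > -1 && u < 1) {
                    double w = 1 - u * u;
                    density += w * w;               // Quartic kernel: (1 - u^2)^2, no Math.exp
                }
            }
            if (density > bestDensity) {
                bestDensity = density;
                bestGf = gf;
            }
        }
        return bestGf;
    }
}
</pre>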
  
Of course, this is a bit different than [[wikipedia:Lloyd's algorithm|Lloyd's algorithm]], the most common k-means algorithm. But it continually recenters the clusters and reclassifies data points much faster than it adds them, so the clusters should still converge nicely.
 
== Thoughts / Plans ==
  
The [[Targeting Challenge RM/Results|TCRM results]] are far better than those of any other gun I've made. I used it in [[Diamond]] 1.39 in place of the usual KNN Main Gun, and it performed about even: [http://darkcanuck.net/rumble/RatingsCompare?game=roborumble&name=voidious.Diamond%201.39&vs=voidious.Diamond%201.372 1.39 vs 1.372]. It's encouraging to see such strong results, but this isn't enough for me to keep it in Diamond. If I find future improvements, I will try it again.
  
I originally thought this was only going to be useful against random movers, but there may be some potential as an [[Anti-Surfer Targeting|Anti-Surfer]] gun, as well. Limiting the size of the clusters and cycling out old points seems like it would behave a lot like a low rolling depth in [[Visit Count Stats]] buffers. I've gotten some respectable [[Anti-Surfer Challenge]] scores already (better than Diamond's current AS gun, even).
  
I used Diamond's code as a base, which made development pretty easy. It's actually a rather simple design. This also meant I could easily integrate it into Diamond's [[Virtual Guns]] array. I don't think it's fast enough for [[melee]], and I don't think [[Wave Surfing]] has enough data to make it useful there, so for now it's just a 1v1 gun experiment.
  
(PS - the name is another cool superhero from [[wikipedia:Powers (comics)|Powers]], since it's part of my Diamond code base.)
  
 
[[Category:Targeting Implementations]]
 
  
 
__NOTOC__
 
