User:Voidious/TripHammer


Background

This began as a new gun using k-means clustering. I knew ABC had tried and abandoned it once upon a time, so I found myself wondering what kind of concessions you would have to make to get k-means clustering working within Robocode's CPU constraints. At first glance, k-means clustering indeed seems a lot more CPU intensive than k-nearest neighbors. But as I considered how I might implement it, I realized it was not only viable, but could even be pretty fast.

TripHammer became a place for me to experiment with different forms of clustering. I only posted about k-means and KNN variants, but I put significant time into a couple other clustering algorithms, too. So far, I haven't found one that improves on KNN (regular "Dynamic Clustering").

TripHammer k-means: How it works

The algorithm

  • The first k points added are used as the initial locations of the means.
  • When a new point comes in, assign it to the nearest cluster and update the mean.
  • After the new point is added, go through a few old points and reclassify them. If a point should be in a different cluster, move it over and update the means of both clusters.
  • Keep a circular linked list of all points to track which point should be reclassified next, so the point that was least recently examined is always the one reclassified (see the sketch after this list).
  • To aim, just find the nearest cluster, then do some kernel density on the points in that cluster to decide on a firing angle (like any old DC gun would).
  • If there's not much data in the nearest cluster, fall back to KNN.
    • Finding the optimal threshold here is probably not trivial, and could be a key to making any magic happen with clustering + KNN.
    • Currently, the KNN cluster size is min(250, data points / 30) -- so it scales up to 250 at about round 10 -- and I use the k-means cluster instead if it's bigger (see the second sketch below).
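
Here is a minimal Java sketch of that incremental loop. The class and helper names, the plain double[] point representation, and the per-point reclassification count are assumptions for the example, not TripHammer's actual code.

import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the incremental k-means loop described above; names and the
// plain double[] point representation are assumptions, not TripHammer's code.
public class OnlineKMeans {
    static class Cluster { double[] mean; int size; }
    // Circular linked list node used to cycle through old points.
    static class Node { double[] point; Cluster cluster; Node next; }

    private final int k;                     // number of clusters
    private final int reclassifyPerPoint;    // old points revisited per new point
    private final List<Cluster> clusters = new ArrayList<Cluster>();
    private Node cursor;                     // most recently examined; cursor.next is least recent

    public OnlineKMeans(int k, int reclassifyPerPoint) {
        this.k = k;
        this.reclassifyPerPoint = reclassifyPerPoint;
    }

    public void addPoint(double[] p) {
        Cluster c;
        if (clusters.size() < k) {
            c = new Cluster();               // the first k points seed the means
            c.mean = new double[p.length];
            clusters.add(c);
        } else {
            c = nearestCluster(p);
        }
        addToCluster(c, p);

        Node node = new Node();              // splice the new point in behind the cursor
        node.point = p;
        node.cluster = c;
        if (cursor == null) {
            node.next = node;
        } else {
            node.next = cursor.next;
            cursor.next = node;
        }
        cursor = node;

        for (int i = 0; i < reclassifyPerPoint; i++) {
            reclassifyNext();                // revisit the least recently examined points
        }
    }

    private void reclassifyNext() {
        cursor = cursor.next;                // the least recently examined point
        Cluster best = nearestCluster(cursor.point);
        if (best != cursor.cluster && cursor.cluster.size > 1) {
            removeFromCluster(cursor.cluster, cursor.point);   // update both means
            addToCluster(best, cursor.point);
            cursor.cluster = best;
        }
    }

    // One weighted-average calculation updates a mean when a point is added...
    private void addToCluster(Cluster c, double[] p) {
        for (int i = 0; i < c.mean.length; i++) {
            c.mean[i] = (c.mean[i] * c.size + p[i]) / (c.size + 1);
        }
        c.size++;
    }

    // ...or removed.
    private void removeFromCluster(Cluster c, double[] p) {
        for (int i = 0; i < c.mean.length; i++) {
            c.mean[i] = (c.mean[i] * c.size - p[i]) / (c.size - 1);
        }
        c.size--;
    }

    // Brute-force nearest-cluster search: k is small, but this runs many times per tick.
    private Cluster nearestCluster(double[] p) {
        Cluster best = null;
        double bestDistSq = Double.POSITIVE_INFINITY;
        for (Cluster c : clusters) {
            double distSq = 0;
            for (int i = 0; i < p.length; i++) {
                double d = c.mean[i] - p[i];
                distSq += d * d;
            }
            if (distSq < bestDistSq) {
                bestDistSq = distSq;
                best = c;
            }
        }
        return best;
    }
}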

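A second sketch shows the aim-time choice between the k-means cluster and the KNN fallback, using the scaling threshold above. The two helpers it calls (pointsInNearestCluster, nearestNeighbors) are hypothetical stand-ins, not real TripHammer methods.

// Sketch of the aim-time choice between the k-means cluster and the KNN fallback.
// pointsInNearestCluster() and nearestNeighbors() are hypothetical helpers; the
// threshold is the scaling formula from the note above.
List<double[]> selectAimPoints(double[] query, int totalPoints) {
    int knnSize = Math.min(250, totalPoints / 30);       // reaches 250 at about round 10
    List<double[]> cluster = pointsInNearestCluster(query);
    if (cluster.size() > knnSize) {
        return cluster;                                  // plenty of well classified data
    }
    return nearestNeighbors(query, knnSize);             // otherwise fall back to KNN
}
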
Technical notes

  • Regarding the initial means: I first tried k random points (common in k-means algorithms), and I've since tried slightly randomizing the first k points that are added, but just using the first k points has worked best.
  • When adding a point to or removing one from a cluster, updating the cluster mean takes just one weighted average calculation (as in the sketch above), so that's really fast.
  • Right now, finding the nearest cluster is brute force, so this is one of the slower parts of the code. Not that a brute force search over so few cluster means is all that slow, but it also runs many times per tick. There are methods to speed this up in k-means algorithms, such as the filtering method detailed here, but I don't think that method would work in my k-means algorithm and I haven't come up with any of my own methods yet.
  • Kernel density is the other slow part of the code, since these clusters can have thousands of points. Some things I do to speed it up (see the sketch after these notes):
    • I test fixed, evenly spaced angles (like Ali) instead of every angle in the cluster (like Diamond).
    • I use Quartic instead of Gaussian kernel density (see here), so I'm not doing costly Math.exp operations all the time.
    • I may play with reducing the number of points used in the kernel density (e.g., with nearest neighbors, or using randomly chosen points). But I like leveraging the huge amounts of well classified data, and so far it's still fast enough for 35 rounds.
  • Current settings:

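For illustration, here is a sketch of that fixed-angle, quartic-kernel density estimate. The angle count and bandwidth are placeholder values, not TripHammer's actual settings, and it assumes the data points have already been reduced to firing-angle offsets within the maximum escape angle.

// Sketch of kernel density over fixed, evenly spaced firing angles with a quartic
// kernel, K(u) proportional to (1 - u^2)^2 for |u| < 1, so no Math.exp is needed.
// The angle count and bandwidth are placeholder values, not TripHammer's settings.
double bestFiringAngle(double[] clusterAngles, double maxEscapeAngle) {
    int numAngles = 59;                      // placeholder: fixed test angles
    double bandwidth = 0.1;                  // placeholder: kernel bandwidth
    double bestAngle = 0;
    double bestDensity = -1;
    for (int i = 0; i < numAngles; i++) {
        double testAngle = -maxEscapeAngle + (2 * maxEscapeAngle * i) / (numAngles - 1);
        double density = 0;
        for (double dataAngle : clusterAngles) {
            double u = (testAngle - dataAngle) / bandwidth;
            if (Math.abs(u) < 1) {
                double w = 1 - u * u;
                density += w * w;            // quartic (biweight) kernel
            }
        }
        if (density > bestDensity) {
            bestDensity = density;
            bestAngle = testAngle;
        }
    }
    return bestAngle;
}
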
Of course, this is a bit different from Lloyd's algorithm, the most common k-means algorithm. But it continually recenters the clusters, and it reclassifies data points much faster than it adds them, so the clusters should still converge nicely.

Thoughts / Plans

I still think it's worth exploring other forms of clustering. In targeting, there are parts of the data space that are very dense. A more sophisticated clustering system would recognize those parts and let you leverage all of that very relevant data (the power of SuperNodes). KNN is likely to ignore huge swaths of it, even with large and scaling cluster sizes. In the rest of the data space or before you have enough data, you can fall back on KNN, as we do today.

I've developed a test harness at User:Voidious/TripHammer/Research that collects wave-based targeting data via a Robocode bot, then runs classification algorithms against the raw data outside of Robocode. This allows much faster and more consistent testing of targeting algorithms.

(PS - the name is another cool superhero from Powers, since it's part of my Diamond code base.)