User:Voidious/TripHammer
Background
This is a new gun I'm working on that uses k-means clustering. I've had the itch to work on a new gun for a while now… Common machine learning algorithms that haven't been explored in Robocode seem like a good place to start, so I contemplate those from time to time.
I was wondering what kind of concessions you would have to make to get k-means clustering working within Robocode's CPU constraints. At first glance, k-means clustering indeed seems a lot more CPU intensive than k-nearest neighbors. But as I thought through how I might implement it, I realized it was not only viable but could even be pretty fast.
How it works
The algorithm
- The first k points that come in are used as the initial locations of the means. I initially tried k random points, which is common in k-means clustering algorithms, but this worked better. (Note that these points are not tied to the clusters after that.)
- When a new point comes in, assign it to the nearest cluster and update the mean.
- After a new point is added, go through a few old points and reclassify them. If a point should be in a different cluster, move it over and update the means of both clusters.
- I keep a circular linked list of all points for tracking which points should next be reclassified. New points are added at the "end" (right before the current list pointer).
- To aim, just find the nearest cluster, then do some kernel density on the points in that cluster. This is one of the slower parts of the code, since these clusters can have thousands of points; the technical notes below touch on keeping it fast.
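The steps above can be sketched as a small class. This is a simplified illustration, not TripHammer's actual code: the class and method names (`OnlineKMeans`, `reclassifyNext`, etc.) are made up, plain lists stand in for the circular linked list, and points are generic feature vectors.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the online k-means scheme: the first k points seed the means,
// each new point joins its nearest cluster, and a few old points are
// reclassified after every insertion so the clusters keep converging.
public class OnlineKMeans {
    static class Cluster {
        double[] mean;
        int size;
        Cluster(double[] seed) { mean = seed.clone(); size = 0; }
    }

    final int k;
    final int reclassifyPerTick;
    final List<Cluster> clusters = new ArrayList<>();
    final List<double[]> points = new ArrayList<>();   // all points, in insertion order
    final List<Integer> assignment = new ArrayList<>(); // current cluster of each point
    int cursor = 0; // next point to reclassify (wraps around, like a circular list)

    OnlineKMeans(int k, int reclassifyPerTick) {
        this.k = k;
        this.reclassifyPerTick = reclassifyPerTick;
    }

    void add(double[] p) {
        if (clusters.size() < k) {
            clusters.add(new Cluster(p)); // first k points seed the means
        }
        int c = nearestCluster(p);
        addToCluster(c, p);
        points.add(p);
        assignment.add(c);
        for (int i = 0; i < reclassifyPerTick && !points.isEmpty(); i++) {
            reclassifyNext();
        }
    }

    // Brute-force search over the k cluster means.
    int nearestCluster(double[] p) {
        int best = 0;
        double bestDist = Double.POSITIVE_INFINITY;
        for (int i = 0; i < clusters.size(); i++) {
            double d = sqDist(clusters.get(i).mean, p);
            if (d < bestDist) { bestDist = d; best = i; }
        }
        return best;
    }

    // Revisit one old point; if it now belongs elsewhere, move it and
    // update the means of both clusters.
    void reclassifyNext() {
        cursor %= points.size();
        double[] p = points.get(cursor);
        int old = assignment.get(cursor);
        int now = nearestCluster(p);
        if (now != old) {
            removeFromCluster(old, p);
            addToCluster(now, p);
            assignment.set(cursor, now);
        }
        cursor++;
    }

    // One weighted average per dimension updates the mean in O(d).
    void addToCluster(int c, double[] p) {
        Cluster cl = clusters.get(c);
        for (int i = 0; i < p.length; i++) {
            cl.mean[i] = (cl.mean[i] * cl.size + p[i]) / (cl.size + 1);
        }
        cl.size++;
    }

    void removeFromCluster(int c, double[] p) {
        Cluster cl = clusters.get(c);
        if (cl.size > 1) { // an emptied cluster keeps a stale mean until reused
            for (int i = 0; i < p.length; i++) {
                cl.mean[i] = (cl.mean[i] * cl.size - p[i]) / (cl.size - 1);
            }
        }
        cl.size--;
    }

    static double sqDist(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) { double d = a[i] - b[i]; s += d * d; }
        return s;
    }
}
```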
Technical notes
- When adding a point to or removing one from a cluster, updating the cluster mean takes a single weighted average calculation, so that's really fast.
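That weighted average is just basic arithmetic. A one-dimensional example (illustrative helper names, not from TripHammer's source):

```java
// Incremental cluster-mean update: no need to re-sum all points.
public class MeanUpdate {
    // Mean after adding point p to a cluster of n points with the given mean.
    static double addPoint(double mean, int n, double p) {
        return (mean * n + p) / (n + 1);
    }

    // Mean after removing point p from a cluster of n points with the given mean.
    static double removePoint(double mean, int n, double p) {
        return (mean * n - p) / (n - 1);
    }
}
```

Adding 20.0 to a 4-point cluster with mean 10.0 gives (10·4 + 20)/5 = 12.0, and removing it again restores the original mean.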
- Right now, finding the nearest cluster is a brute-force search, so this is the other slow part of the code (besides kernel density). There are methods to speed this up in k-means algorithms, such as the filtering method detailed here. (I don't think that exact method will help with my algorithm, though.)
- Since these clusters can get really big, I may play with reducing the number of points used in the kernel density (e.g., with nearest neighbors, or using randomly chosen points). But I like leveraging the huge amounts of well classified data, and so far it's still fast enough for 35 rounds, so I've left it for now.
- Right now, I'm using 250 clusters and reclassifying 20 points each tick. The scan distance function is almost exactly the same as the one in Diamond's main gun. I'm using precise GuessFactors, not displacement vectors.
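The aiming step could look something like the following. This is a hedged sketch, not Diamond's actual density code: the Gaussian kernel, the bandwidth parameter, and the idea of evaluating density only at the stored GuessFactors are all assumptions for illustration.

```java
// Kernel density over the GuessFactors in the nearest cluster: score each
// stored GF by summing a Gaussian kernel over all the others, and fire at
// the densest one. This is O(n^2) in cluster size, which is why it becomes
// one of the slower parts when clusters hold thousands of points.
public class KernelDensityAim {
    static double bestGuessFactor(double[] clusterGfs, double bandwidth) {
        double bestGf = 0, bestDensity = -1;
        for (double candidate : clusterGfs) {
            double density = 0;
            for (double gf : clusterGfs) {
                double z = (candidate - gf) / bandwidth;
                density += Math.exp(-0.5 * z * z);
            }
            if (density > bestDensity) {
                bestDensity = density;
                bestGf = candidate;
            }
        }
        return bestGf;
    }
}
```

Subsampling the cluster first (as suggested above, via nearest neighbors or random selection) would cut the quadratic cost directly.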
Of course, this is a bit different from Lloyd's algorithm, the most common k-means algorithm. But it continually recenters the clusters and reclassifies data points much faster than it adds them, so the clusters should still converge nicely.
Results / Plans
Based on my results so far (see TCRM results), I think there's definite potential here. It is already (within 24 hours) performing on par with Dookious' and Diamond's main guns against random movers, which is what I see as this gun's strength. I'm not sure I'll even pursue Anti-Surfer tweaks to this gun.
I used Diamond's code as a base, which made development pretty easy. It's actually a rather simple design. I could integrate it into Diamond's Virtual Guns pretty easily, as well, but I'm hesitant because I don't think it's fast enough for melee. I'm really not sure if this will go into Diamond, become its own bot, or just stand alone as a fun gun experiment.
(The name is another cool superhero from Powers, since it's part of my Diamond code base.)