User talk:Voidious/TripHammer

Initial chatter

Interesting gun... I wonder, if you smoothed all hits into an array or used a DrussGT-like method, whether it would be a lot faster. » Nat | Talk » 04:50, 13 September 2009 (UTC)

Ooh.. interesting... this makes me more inclined to try my cluster-tree idea, which I started to implement but then got distracted from by melee things.. but I do want to get my melee stuff more finished before I get back to that.. hmmm --Rednaxela 05:16, 13 September 2009 (UTC)

I wonder how this would compare with one of our regular k-nearest-neighbors guns using 1/250th of the number of scans? Also, you say you're using precise GFs - what method are you using to find an angle? The method Rednaxela and I use, which is best described as precise range-overlap, is the best I have tested. Also, I assume this means you've written your own precise-GF utility method. I remember when writing mine, something I was grateful for was that bots in Robocode are represented by non-rotating rectangles =) --Skilgannon 07:42, 13 September 2009 (UTC)
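
As an aside, here is a minimal sketch of the kind of thing "precise range-overlap" refers to - not Skilgannon's or Rednaxela's actual code. Since Robocode bots are non-rotating 36x36 squares, the span of bearing offsets a wave covers while crossing the bot can be bounded by the offsets to the four corners of its bounding box. Real implementations intersect the wave with the box over every tick it overlaps; this simplification only samples the corners at the single tick the wave reaches the bot's center, and all names here are assumptions.

    import java.awt.geom.Point2D;
    import robocode.util.Utils;

    public class RangeOverlapSketch {
        static final double BOT_HALF_WIDTH = 18;

        // Returns {minOffset, maxOffset} relative to the absolute bearing at fire
        // time, given the enemy's location when the wave crosses it.
        static double[] hitOffsetRange(Point2D.Double fireLocation, double absBearing,
                                       Point2D.Double enemyLocation) {
            double min = Double.POSITIVE_INFINITY, max = Double.NEGATIVE_INFINITY;
            for (int xSign = -1; xSign <= 1; xSign += 2) {
                for (int ySign = -1; ySign <= 1; ySign += 2) {
                    double cornerX = enemyLocation.x + xSign * BOT_HALF_WIDTH;
                    double cornerY = enemyLocation.y + ySign * BOT_HALF_WIDTH;
                    double offset = Utils.normalRelativeAngle(
                        Math.atan2(cornerX - fireLocation.x, cornerY - fireLocation.y) - absBearing);
                    min = Math.min(min, offset);
                    max = Math.max(max, offset);
                }
            }
            return new double[] { min, max };
        }
    }

The whole span of GFs covered by a hit would then typically be smoothed into the stat array, rather than a single bin.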

The precise GF code is just the same as Dookious has been using for ages. I think Dooki was the first bot to find the advantage to using precise GFs, actually =), though I know PEZ had experimented with it even before that. Anyway, it's not so precise that I'm looking at bot width... just that I use precise prediction to calculate the max escape angle instead of just Math.asin(8 / bullet speed). I calculate the escape angle the enemy could reach if he moved full speed perpendicularly (and just ran into any walls), then the escape angle he would reach if he did the same but with Wall Smoothing, and I take the larger of the two. I have been wondering if it's worth tweaking this to be even more precise; perhaps I'll take a look. Funny to look and see that the last change I made to Dookious was tweaking the precise GF code in 1.573...
About the KNN comparison, do you mean which would perform better with 1/250th the data? Or if this gun had 1/250th the data so that the clusters were of similar size? It's also worth noting that switching from displacement vectors to (precise) GFs was an immediate boost in score - I may run some TCRM seasons with Diamond's gun switched to use GFs for a fairer comparison. Diamond gets ~89.85 with its main gun, even with displacement vectors.
--Voidious 08:56, 13 September 2009 (UTC)
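
Purely to illustrate the description above, here is a minimal sketch of a precise MEA calculation - not Dookious's or Diamond's actual code. It simulates the enemy moving at full speed perpendicular to the bearing, clamped to the battle field (the "just ran into any walls" case; the wall-smoothing variant is omitted), and records the widest bearing offset reached before the virtual bullet covers the distance. The field size, wall margin, and helper names are assumptions.

    import java.awt.geom.Point2D;
    import robocode.util.Utils;

    public class PreciseMeaSketch {
        static final double MAX_SPEED = 8;
        static final double FIELD_W = 800, FIELD_H = 600, WALL_MARGIN = 18;

        // Traditional MEA, for comparison.
        static double traditionalMea(double bulletSpeed) {
            return Math.asin(MAX_SPEED / bulletSpeed);
        }

        // Precise MEA in one orbit direction (+1 or -1), walls handled by clamping.
        static double preciseMea(Point2D.Double myLocation, Point2D.Double enemyStart,
                                 double bulletSpeed, int direction) {
            double initialBearing = absoluteBearing(myLocation, enemyStart);
            Point2D.Double enemy = new Point2D.Double(enemyStart.x, enemyStart.y);
            double maxEscape = 0, bulletDistance = 0;
            // Advance the virtual bullet and the enemy one tick at a time until
            // the bullet has covered the distance to the enemy.
            while (bulletDistance < myLocation.distance(enemy)) {
                double moveAngle = absoluteBearing(myLocation, enemy) + direction * Math.PI / 2;
                enemy.x = clamp(WALL_MARGIN, enemy.x + MAX_SPEED * Math.sin(moveAngle), FIELD_W - WALL_MARGIN);
                enemy.y = clamp(WALL_MARGIN, enemy.y + MAX_SPEED * Math.cos(moveAngle), FIELD_H - WALL_MARGIN);
                bulletDistance += bulletSpeed;
                double offset = Utils.normalRelativeAngle(absoluteBearing(myLocation, enemy) - initialBearing);
                maxEscape = Math.max(maxEscape, Math.abs(offset));
            }
            return maxEscape;
        }

        static double absoluteBearing(Point2D.Double from, Point2D.Double to) {
            return Math.atan2(to.x - from.x, to.y - from.y);
        }

        static double clamp(double min, double value, double max) {
            return Math.max(min, Math.min(value, max));
        }
    }

A full version would also run the wall-smoothed simulation and take the larger of the two escape angles, as described above.
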
Ha! When I read "Dooki was the first bot to find the advantage to using precise GFs", I started digging into the Dookious code before I came back and read "Anyway, it's not so precise that I'm looking at bot width...". I think Dooki is the only robot that uses precise MEA now. For the KNN comparison, I think Skilgannon means the latter: he wants to test k-NN with a similar cluster size. » Nat | Talk » 09:36, 13 September 2009 (UTC)
Phoenix also uses precise MEA in its gun, I'm pretty sure. And Diamond doesn't use it for precise GFs, but it does use (precise MEA / traditional MEA) as its wall distance attribute in 1v1. --Voidious 14:31, 13 September 2009 (UTC)
Yeah, what I mean is that each cluster has 1/250th of your data, and you use the entire cluster to determine what angle to shoot at, so why not use KNN with your K set to size/250 for a comparison? And I think there's been a bit of confusion: what you do is precise MEA, not precise (i.e. sub-pixel) GF hit ranges like Garm, RougeDC, DrussGT (in the gun only) and Wintermute. --Skilgannon 10:55, 13 September 2009 (UTC)
Ohhh, gotcha. Sorry for my half-asleep ramblings, then. =) I've always thought of that as "precise bot width" or "precise intersection". I still think "precise GF" is an accurate description of what I do here (along with "precise MEA"), since in this system, GF1 is precisely the maximum bearing offset the enemy can reach in his current orbital direction, which is what GF1 is meant to describe. With traditional MEA, GF1 is just a very rough approximation. No argument that both increase the precision of a GF system, though. --Voidious 14:31, 13 September 2009 (UTC)
As someone who's never ever used 'precise MEA' in a gun... I wonder how using it would affect SaphireEdge, and by how much that might boost it into a lead over TripHammer... :) --Rednaxela 14:48, 13 September 2009 (UTC)
Well, every gun is different, of course, but I bet you'd gain a lot. Dooki's TC2K6 score jumped by ~1 whole point when I made the switch. (See scores on Dookious/DookiLightning - 1.16 added precise MEA, 1.161 tweaked it further.) --Voidious 15:15, 13 September 2009 (UTC)
From my investigations, using precise MEA helped a lot in the targeting challenges but not much in the rumble - no idea why =) FWIW the DrussGT in the TCRM isn't using any form of precise MEA. --Skilgannon 16:40, 13 September 2009 (UTC)
I gained almost 10 points when I implemented precise MEA in Dookious, it's what sent me past 2100... however, I later found what might have been a performance enhancing bug in that same version, so I can't say for sure. In the RoboRumble Gun Challenge, precise MEA gained ~6 points (legitimately). It makes sense precise MEA would have more effect in TCs, because with power 3 bullets, the walls impact the MEA a lot more. --Voidious 16:48, 13 September 2009 (UTC)

Speaking of "Rumble vs TC"... unless I broke something (possible, unlikely I think), Diamond 1.37's k-means gun posts consistently better TCRM scores than the old general purpose one, yet it lost 0.4% APS in the rumble. Ouch! --Voidious 14:28, 14 September 2009 (UTC)

0.8498

The big "breakthrough" here (91.35 in TCRM) was doing KNN instead of pulling in nearby clusters when I don't have enough data. My KNN cluster size is min(data points / 30, 250), and whenever the nearest k-means cluster has less data than that, I do KNN instead. Previously, I was pulling in up to 5 nearby clusters as long as I had less than 250 data points, a mechanic which I'd tweaked up to 90.96.

By comparison, if 0.8498 chooses its KNN option all the time, it scores 91.20. So the k-means is only giving a small (but measurable) boost, presumably in the SuperNodes. Tweaking the KNN cluster size might close the gap, but on the other hand, tweaking how/when I use KNN could widen it further.

As a side note, I think there are still a lot of ideas worth exploring with different forms of clustering. I've got one more idea I'm going to try next, and I'll also keep tweaking the k-means/KNN hybrid.

--Voidious 13:20, 30 September 2009 (UTC)

Congrats! So k-means does have some advantage over kNN. Looking forward to the real robot! Nat Pavasant 13:24, 30 September 2009 (UTC)

Wow! That's an amazingly high TCRM result there! Makes me have this urge to work on my cluster-tree idea... but I already have Glacier and SaphireEdge on the table, haha --Rednaxela 13:57, 30 September 2009 (UTC)

Whoah -- looks like you've cracked open the RM ceiling that DrussGT has been slowly pushing upwards. Nice work! --Darkcanuck 15:27, 30 September 2009 (UTC)

Wow! Do you mind if I take a look at (and take ideas from) how you are defining your dimensions? Because if you're scoring 91.20 with just the KNN, I think there's some magic you've got here that isn't just the clustering... --Skilgannon 17:56, 1 October 2009 (UTC)

Be my guest. =) DistanceTripHammer.java defines the distancing function, and DiamondFist.java is where the bulk of the gun data collection takes place. Precise MEAs for the wall distance and how I split up velocity, sin(relative heading), and cos(relative heading) are the only abnormal parts of the dimensions, I think... Good luck. --Voidious 18:13, 1 October 2009 (UTC)

k-means vs KNN

Well, after I found a small bug in the k-means code that would also affect the KNN part, I re-ran 100 seasons of the k-means/KNN hybrid and KNN by itself. I got 91.29 TCRM in both runs. So I think it's safe to say the k-means isn't doing much for me here, and I may switch it off in Diamond. I still think there may be something to gain with having a more sophisticated clustering algorithm backed by KNN, particularly so that you can flex the full power of SuperNodes in the very dense areas of the graph. But I can also accept that KNN actually makes a whole lot of sense in a continually growing data set. --Voidious 21:59, 5 October 2009 (UTC)

I am not sure if I ever mentioned this, but once upon a time I wrote a k-means clustering gun. In the end kNN is really much more efficient and much faster. k-means categorizes data around arbitrary points and then scoots those centers around as needed, after which you determine which group your current data point falls into and use that group; whereas treating your current data point as the center of its own group and just finding its nearest neighbors means you have more relevant data and can skip re-centering many groups. In the end I found k-means took around n + log(n + 1) times longer than kNN, where n is the number of groups (in k-means). --Chase 11:15, 14 March 2010 (UTC)

I knew you'd tried it, but didn't know much about what you did. I'm curious, did you have some type of dynamic variant of the normal k-means algorithm, or were you redefining clusters from scratch each time you aimed? The latter is very slow indeed, but the former... is still slow, but fast enough not to skip turns.

I disagree that KNN putting your current point at the center of the cluster necessarily makes the resulting classification more relevant. If that were so, nobody would use any type of clustering to classify a data point; they'd just use KNN. =) But KNN is certainly many times faster, and so far the difference between it and k-means is negligible in my tests - my KMC classifier (which uses KNN until it has enough data) is barely outperforming my KNN at the moment...

--Voidious 19:45, 14 March 2010 (UTC)
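
For what it's worth, a "dynamic variant" could look like sequential (online) k-means, sketched below as a generic textbook technique rather than anyone's actual gun code: each new data point is assigned to its nearest centroid, which is then nudged toward it as a running mean, so no re-clustering from scratch is ever needed.

    // Sequential (online) k-means: each new observation is assigned to its
    // nearest centroid, which is then nudged toward it.
    public class OnlineKMeansSketch {
        private final double[][] centroids;   // k centroids
        private final int[] counts;           // points assigned to each centroid so far

        public OnlineKMeansSketch(double[][] initialCentroids) {
            centroids = initialCentroids;
            counts = new int[initialCentroids.length];
        }

        public int add(double[] point) {
            int nearest = nearestCentroid(point);
            counts[nearest]++;
            double rate = 1.0 / counts[nearest];  // running-mean update
            for (int d = 0; d < point.length; d++) {
                centroids[nearest][d] += rate * (point[d] - centroids[nearest][d]);
            }
            return nearest;  // cluster index the point was filed under
        }

        public int nearestCentroid(double[] point) {
            int best = 0;
            double bestDist = Double.POSITIVE_INFINITY;
            for (int i = 0; i < centroids.length; i++) {
                double dist = 0;
                for (int d = 0; d < point.length; d++) {
                    double diff = point[d] - centroids[i][d];
                    dist += diff * diff;
                }
                if (dist < bestDist) {
                    bestDist = dist;
                    best = i;
                }
            }
            return best;
        }
    }

Each add() is O(k * dimensions), which is cheap enough to run every tick; whether the drifting centroids stay meaningful over a growing data set is a separate question.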

I did the latter; at the time I wasn't the most innovative programmer. It did skip a considerable number of turns. --Chase 17:08, 17 March 2010 (UTC)

I'd personally say that the main reason to use clustering over KNN is when the number of possible categories to classify into is discrete. In the case of targeting, the range of possible angles is continuous. Because of that, I'd only expect clustering to have any possible advantage in the case of highly modal enemy behavior. Perhaps a good approach would be to cluster the enemy movement into a few large clusters, trying to automatically detect when clusters are redundant and when they are not, and then do a KNN search within each cluster. Might work... --Rednaxela 21:02, 14 March 2010 (UTC)

Well, another reason imo is to recognize when you have tons of relevant data (SuperNodes). You could do a "within k distance" instead of "k nearest neighbors", but calculating that on the fly for situations where you have 1500+ very close data points may not be so feasible. --Voidious 21:27, 14 March 2010 (UTC)

Well, for that I figure you would have to determine the current area the data takes up (really simple if you have a kd-tree with hyper-rectangular bounds checking) and the mean distance between data points in each dimension, then use those to determine the optimal k distance and the number of clusters to use. The only other option is to do Gaussian density calculations, which would probably produce better cluster locations, but would require you to check every point every so often. --Chase 17:08, 17 March 2010 (UTC)

Hi, Voidious! Can you explain to me why the k-means gun performs better than KNN?

I've thought about it and it looks like a paradox. :) For example, if the data point being searched for lies on the border of a cluster, then k-means selects points from the current cluster that are far from it, while KNN selects more relevant data from both clusters.

It seems to me that k-means is equivalent to a dynamic segmentation method.

I've developed a clustering method which splits 2D points on the battle field into reasonable clusters with complexity of about (N / clusters_count). I think it's also applicable to k-dimensional points and, if it's interesting to you, I can share it. --Jdev 17:07, 5 October 2010 (UTC)

Well, actually, I eventually concluded it was not better. The results ended up about the same so I went back to KNN because it's simpler/faster. But I can offer some thoughts on why it was worth exploring:

  • For very dense areas of the graph, KNN will ignore a lot of relevant data. With K-means, you might have a large cluster that has 2000 points in it, and using all of that data is good. In another area, the cluster may be very small. Visit Count Stats / Segmentation also has this advantage.
    • Another related idea, of course, is instead of doing K-nearest neighbors, do all neighbors within X distance (see the sketch after this list). I haven't experimented with that idea much, but it has been a part of some other clustering algorithms I've tried.
  • It's possible that a point on the edge of a cluster should be classified in that cluster, not simply to the nearest points. It depends on the characteristics of the data. I don't think this is true for most Robocode movements, but I wouldn't know until I tried. =) There are lots of applications where people use clustering algorithms to classify data and it works better than KNN would, even though they all suffer from this "edge of cluster" example.
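
To make the "within X distance" idea above concrete, here is a minimal sketch of such a range query - a generic technique, not Diamond's or TripHammer's code. A linear scan is shown for clarity; with 1500+ nearby points per aim you would want a kd-tree range search instead.

    import java.util.ArrayList;
    import java.util.List;

    public class RadiusSearchSketch {
        // Returns every data point within `radius` of the query point.
        static List<double[]> neighborsWithin(double[] query, List<double[]> points, double radius) {
            double radiusSq = radius * radius;
            List<double[]> result = new ArrayList<>();
            for (double[] p : points) {
                double distSq = 0;
                for (int d = 0; d < query.length; d++) {
                    double diff = query[d] - p[d];
                    distSq += diff * diff;
                }
                if (distSq <= radiusSq) {
                    result.add(p);   // inside the hypersphere of the given radius
                }
            }
            return result;
        }
    }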

I'd certainly be interested to hear about any clustering experiments you try. I have tried a few others but without any major success. Clustering can be very slow, though, while KNN is both fast and dynamic - so a well tuned KNN may always be optimal for Robocode.

--Voidious 17:45, 5 October 2010 (UTC)