Dodging Performance Anomaly?

Revision as of 16 November 2013 at 19:32.

I recently discovered Robocode, and I made a relatively simple bot using DookiCape movement and a simple but somewhat unique segmented GF gun (distance, lateralV, advancingV). I don't know if you are still interested in improving your robot, but I noticed that over the course of a 1000-round battle of my bot vs Pris, my hit rate climbed much higher than it should have, all the way up to 20%. I don't know why my targeting is working so well; I don't even decay old data right now. You may want to look into Pris's surfing for bugs, etc. P.S. My bot won the 1000-round battle with a 78% score share. By comparison, my bot scores 35% vs Shadow in a 1000-round battle.
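As an illustration of the kind of segmented GF gun described above, here is a minimal, hypothetical sketch in Java. The class name, segment boundaries, and bucket counts are all invented for the example; they are not Straw's actual values.

```java
// Hypothetical sketch of a segmented guess-factor store. Segment boundaries
// and bin counts below are illustrative assumptions, not any real bot's values.
public class SegmentedGfBuckets {
    static final int DIST_SEGS = 5, LAT_SEGS = 5, ADV_SEGS = 3, GF_BINS = 31;

    // Visit counts per (distance, lateralV, advancingV, guess-factor) cell.
    final int[][][][] bins = new int[DIST_SEGS][LAT_SEGS][ADV_SEGS][GF_BINS];

    static int clamp(int i, int max) { return Math.max(0, Math.min(max - 1, i)); }

    // Map raw observations to segment indices.
    static int distSeg(double distance)  { return clamp((int) (distance / 200.0), DIST_SEGS); }
    static int latSeg(double lateralV)   { return clamp((int) (Math.abs(lateralV) / 2.0), LAT_SEGS); }
    static int advSeg(double advancingV) { return clamp((int) ((advancingV + 8.0) / 6.0), ADV_SEGS); }

    // Guess factor in [-1, 1] mapped to a bin index.
    static int gfBin(double gf) { return clamp((int) ((gf + 1.0) / 2.0 * GF_BINS), GF_BINS); }

    // Record the guess factor the enemy was at when a wave broke.
    void record(double distance, double latV, double advV, double gf) {
        bins[distSeg(distance)][latSeg(latV)][advSeg(advV)][gfBin(gf)]++;
    }

    // Fire at the most-visited guess factor for the current segment.
    double bestGf(double distance, double latV, double advV) {
        int[] slice = bins[distSeg(distance)][latSeg(latV)][advSeg(advV)];
        int best = GF_BINS / 2; // default to the middle bin (GF 0, head-on)
        for (int i = 0; i < GF_BINS; i++) if (slice[i] > slice[best]) best = i;
        return (best + 0.5) / GF_BINS * 2.0 - 1.0;
    }
}
```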

    Straw (talk) 01:52, 16 November 2013

    Unfortunately, I don't think Darkcanuck comes around here any more. That is interesting though. I wonder if something about the neural net gets corrupted? I remember that TheBrainPi, which saves its neural net to disk between matches, had a bug that was solved by deleting its neural net data file (so it could start fresh, I guess).

    It's also worth noting that RoboRumble matches are 35 rounds, so that's what many of us use in most or all of our testing. I bet a lot of top bots have issues in 1000 round battles.

    And welcome to the wiki. ;)

      Voidious (talk) 02:02, 16 November 2013

      Thanks for the amazingly fast reply, and for the movement system. I've only been working on Robocode for about a month, and I started on targeting first. Another interesting thing is that DrussGT only scores 73% with a 17% hit rate against Pris, worse than mine, yet it totally trounces my bot in a direct battle. Has anybody thought about using random forests for movement or targeting? They use an ensemble of decision trees for classification. It's slow to generate a forest, but running data through one is pretty fast. I could imagine a robot which retrained only a few of the trees in its forest every tick. That seems somewhat similar to what people are doing with k-nearest-neighbor classification.
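The staggered-retraining idea above could be structured like this hypothetical sketch. The `Tree` interface is a placeholder for a real decision-tree learner, and all names are invented for illustration; the point is only the round-robin schedule that rebuilds a few trees per tick.

```java
import java.util.List;
import java.util.function.Function;

// Structural sketch of staggered retraining: each tick, only a few of the
// forest's trees are rebuilt on the latest data, so the per-tick cost stays
// bounded while the ensemble slowly tracks new observations.
public class StaggeredForest<S> {
    interface Tree<S> { double predict(S state); }

    private final Tree<S>[] trees;
    private final Function<List<S>, Tree<S>> trainer; // builds one tree from the data
    private int next = 0; // round-robin pointer into the forest

    @SuppressWarnings("unchecked")
    StaggeredForest(int size, Function<List<S>, Tree<S>> trainer) {
        this.trees = new Tree[size];
        this.trainer = trainer;
    }

    // Retrain k trees this tick, cycling through the whole forest over time.
    void tick(List<S> data, int k) {
        for (int i = 0; i < k; i++) {
            trees[next] = trainer.apply(data);
            next = (next + 1) % trees.length;
        }
    }

    // Ensemble prediction: average over the trees trained so far.
    double predict(S state) {
        double sum = 0;
        int n = 0;
        for (Tree<S> t : trees) {
            if (t != null) { sum += t.predict(state); n++; }
        }
        return n == 0 ? 0 : sum / n;
    }
}
```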

        Straw (talk) 02:37, 16 November 2013

        I've looked at random forests before; another technique which interested me was Extreme Learning Machines, which are feed-forward NNs working in an ensemble. The trouble I found was that even though these methods are fast compared to other machine learning techniques (k-means, feedback NNs, SVMs), they are still much slower than a single KNN call in a kd-tree, just because of the amount of data they need to touch for each 'event'. A kd-tree inserts a new point in O(log N) and answers a query in O(log N) on average.

        Feel free to prove me wrong though =) I'd love something which works well beyond the ubiquitous kd-tree!

        Another thing to consider is how you are going to pose the question. A lot of the successful NN-based approaches have used a bunch of classifiers, one for each potential firing angle, and shot at the angle with the highest predicted probability. Others have tried posing it as a straight regression problem, but I don't think those worked as well, possibly because of the high noise (against top bots you are lucky to get a 10% hit rate).
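The "one classifier per firing angle" framing might look like the following sketch, where some model has already produced a score per discretized angle bin and the gun simply fires at the argmax. The bin count and class name are assumptions for illustration, not taken from any actual bot.

```java
// Sketch of the "one output per firing angle" framing: a model emits a score
// for each discretized angle bin and the gun fires at the highest-scoring bin.
public class AngleBinPicker {
    static final int BINS = 61; // odd count so the middle bin is exactly head-on

    // Convert a bin index back to an offset within [-maxEscapeAngle, +maxEscapeAngle].
    static double binToAngle(int bin, double maxEscapeAngle) {
        return ((bin + 0.5) / BINS * 2.0 - 1.0) * maxEscapeAngle;
    }

    // Pick the firing offset whose bin has the highest predicted hit score.
    static double bestAngle(double[] scores, double maxEscapeAngle) {
        int best = 0;
        for (int i = 1; i < scores.length; i++) {
            if (scores[i] > scores[best]) best = i;
        }
        return binToAngle(best, maxEscapeAngle);
    }
}
```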

        I'd be interested to hear what you end up trying, and how it works out.

          Skilgannon (talk) 11:06, 16 November 2013

          I've looked at random forests before, but only briefly, and only because I saw on Wikipedia that they are like the state of the art in machine learning classification algorithms. :-P The other classification system I've always wanted to tinker with was Support Vector Machines, which I learned about in school and seemed really cool/versatile.

          My main efforts to top KNN have been clustering algorithms, mainly a dynamic-ish k-means and one based on "quality threshold" clustering. I managed to get hybrid systems (meaning they use KNN until they have some clusters) on par with my KNN targeting, but getting the same accuracy at a fraction of the speed wasn't useful.
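For reference, the cluster-based lookup in such a hybrid reduces to a nearest-centroid search, sketched here with invented names; a real implementation would also store per-cluster statistics (e.g. a guess-factor histogram per cluster) to aim with.

```java
// Minimal nearest-centroid lookup: given cluster centroids in feature space,
// find the cluster a new observation belongs to. Names are illustrative.
public class NearestCentroid {
    static int nearest(double[][] centroids, double[] p) {
        int best = 0;
        double bestD = Double.MAX_VALUE;
        for (int i = 0; i < centroids.length; i++) {
            double d = 0; // squared Euclidean distance to centroid i
            for (int j = 0; j < p.length; j++) {
                double diff = centroids[i][j] - p[j];
                d += diff * diff;
            }
            if (d < bestD) { bestD = d; best = i; }
        }
        return best;
    }
}
```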

          KNN really is fast as heck and just seems perfectly suited to the Robocode scenario. But Pris is pretty damn impressive with the neural nets and I'm sure someone could do better.

            Voidious (talk) 20:32, 16 November 2013

            Even in a 35-round battle my robot still barely wins, at 51%, which is far higher than its score against comparably ranked bots in the rumble. This is also with no preloaded GF data; it blindly fires head-on (HOT) at first.

              Straw (talk) 02:44, 16 November 2013