

Notes on 0.36c

This version has yet to reach 2000 battles, but it has full pairings and seems to be on par with Gaff. They have a lot of code in common, but the movements are different: Gaff is multi-mode, with a pure random mode (decent vs top bots) and a dodging mode (basically very weak wave surfing, good against simple targeters), whereas Pris has only one mode, trained through reinforcement learning. From the current results it looks like the learning method has been able to create a style that's not just random noise, since it can match Gaff's performance. Which is encouraging. :) --Darkcanuck 05:43, 11 June 2009 (UTC)


Very cool that you're trying some neural surfing, and even cooler that it's working so well. I think you roughly cracked the former-2000 barrier here, and you're only a few spots away from Engineer, the top (and only other?) pure-NN bot. Nice work! --Voidious 04:02, 28 August 2009 (UTC)

Thanks! I'm finding movement far more challenging than targeting. I have a few more easy tweaks for Pris but after that I may have to take a break and do something different (melee?) in order to generate more ideas. Still waiting for Holden to reach 2000 battles so I can unleash the next version... --Darkcanuck 04:46, 28 August 2009 (UTC)

Nice stuff indeed! And yeah, I'd say that movement generally is far more challenging. With targeting you mostly just store values and predict values, but movement involves the bot 'planning' and works in two resultant dimensions rather than one. --Rednaxela 04:52, 28 August 2009 (UTC)


Big congrats on top 20! And becoming the top NN bot (by a margin). Pris is a really cool bot. Keep up the good work. =) --Voidious 14:39, 19 September 2009 (UTC)

Thanks, top-20 has been a longtime goal of mine. But I should have taken a screen shot when Pris was in 1st place after 6 battles... =)
Now I need to figure out how to defeat MirrorMicro. It's always been a problem bot for me, and it's one of Pris' only 6 losses. I've watched many battles between the two and Pris can't hit her own movement as well as MirrorMicro does, which is quite bizarre. Pris trounces other mirror movers so that's not the problem. MirrorMicro's targeting looks unusual too -- it's supposed to be circular but isn't. --Darkcanuck 18:15, 19 September 2009 (UTC)
Hmm, well, if you look on oldwiki:Mirror, read ABC's comment maybe? --Rednaxela 18:29, 19 September 2009 (UTC)
After reading your comment, I've been trying to understand MirrorMicro 1.1's targeting. :) If I understand it correctly: about every turn it creates a wave, and when a wave reaches the target it records that wave's relative angle. It then adds the latest recorded relative angle to its gun heading. The confusing thing is that guessAngle (the relative angle) is a static variable, but is accessed like an object variable. --Positive 21:25, 19 September 2009 (UTC)
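If that reading is right, a minimal sketch of the scheme might look like the following. This is my own hedged reconstruction, not MirrorMicro's actual source; the class and method names (LastWaveGun, fireWave, etc.) are invented for illustration, and bearings use the atan2(dx, dy) clockwise-from-north convention Robocode uses:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Hypothetical sketch: fire a wave every turn; when a wave's radius reaches
// the enemy, record the offset from the bearing at fire time to the enemy's
// current bearing, and aim the next shot with that latest offset applied.
class LastWaveGun {
    private static class Wave {
        final double x, y, fireTime, speed, initialBearing;
        Wave(double x, double y, double fireTime, double speed, double initialBearing) {
            this.x = x; this.y = y; this.fireTime = fireTime;
            this.speed = speed; this.initialBearing = initialBearing;
        }
    }

    private final List<Wave> waves = new ArrayList<>();
    private double guessAngle = 0;  // latest observed relative angle

    void fireWave(double x, double y, double time, double bulletSpeed, double bearingToEnemy) {
        waves.add(new Wave(x, y, time, bulletSpeed, bearingToEnemy));
    }

    // Call each tick: any wave that has reached the enemy updates guessAngle.
    void update(double time, double enemyX, double enemyY) {
        for (Iterator<Wave> it = waves.iterator(); it.hasNext(); ) {
            Wave w = it.next();
            double dist = Math.hypot(enemyX - w.x, enemyY - w.y);
            if ((time - w.fireTime) * w.speed >= dist) {
                double bearingNow = Math.atan2(enemyX - w.x, enemyY - w.y);
                guessAngle = bearingNow - w.initialBearing;
                it.remove();
            }
        }
    }

    // Aim by adding the latest observed offset to the current gun bearing.
    double aim(double currentBearingToEnemy) {
        return currentBearingToEnemy + guessAngle;
    }
}
```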
Oh and, congrats. :) --Positive 21:36, 19 September 2009 (UTC)
Hmmm, that's really helpful. I peeked at the source just now and I guess MirrorMicro is basically aiming using the offset that would have worked for the current wave breaking over the target? There should be a way of reversing this and using it as a movement input, but it probably would require firing movement waves every tick which Pris doesn't currently track... --Darkcanuck 23:23, 19 September 2009 (UTC)
On that note, DrussGT (and Wintermute) use the GF of the currently breaking wave as a dimension in it's gun, so it's probably exploiting the same weakness as MirrorMicro. I knew that dimension would come in handy some day =) --Skilgannon 08:44, 20 September 2009 (UTC)
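For illustration, a hedged sketch of what adding that dimension as one more classifier input might look like. This is not DrussGT's or Pris's actual code; the attribute choices and scalings here are illustrative, with each feature squashed to roughly [0, 1] like other common targeting attributes:

```java
// Hypothetical feature vector for a targeting classifier; the last entry
// is the guess factor of the wave currently breaking over the enemy.
class TargetingFeatures {
    static double[] targetingFeatures(double distance, double lateralVelocity,
                                      double advancingVelocity, double breakingWaveGF) {
        return new double[] {
            Math.min(distance / 900.0, 1.0),   // distance, roughly battlefield scale
            Math.abs(lateralVelocity) / 8.0,   // |lateral velocity| / max robot speed
            (advancingVelocity + 8.0) / 16.0,  // advancing velocity shifted into [0, 1]
            (breakingWaveGF + 1.0) / 2.0       // breaking-wave GF, [-1, 1] -> [0, 1]
        };
    }
}
```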
I do remember you mentioning this once when I asked about targeting segmentation. But I never clued in that this is why I have yet to make any headway on improving Pris' score vs DrussGT. I guess you can count on me adding this to the NN movement in the next release...  ;) --Darkcanuck 04:32, 21 September 2009 (UTC)
Thanks again for both of your help -- my dev. version now handily beats MirrorMicro with this dimension added to the movement. No significant improvement vs DrussGT and my Shadow score has actually dipped though. --Darkcanuck 23:19, 21 September 2009 (UTC)

Congrats man, good job there. I have a request for future versions though. If you are going to beat YersiniaPestis, please beat Shadow too, I can't hold the PL crown if you don't ;-). --zyx 02:36, 20 September 2009 (UTC)

Sorry about that. My best (new) dev version only beats Shadow 30% of the time, but if I can improve it, you can have your crown back. Although if you check the PL rankings, Pris isn't too far back... (tied with DrussGT for 4th) --Darkcanuck 04:32, 21 September 2009 (UTC)

Very nice stuff here with Pris. I expect RougeDC to be pushed further down the rankings soon :) --Rednaxela 03:00, 20 September 2009 (UTC)


It looks like Pris 0.88 will stay 14th. Wow, I'm really impressed! Those neural networks seem to be working pretty well. :) --Positive 16:20, 23 September 2009 (UTC)

I expected correctly :) --Rednaxela 16:27, 23 September 2009 (UTC)

Yeah, really amazing what you're doing with Pris. A neural net bot on par with CassiusClay is a serious Robocode milestone, if you ask me. Keep it up! --Voidious 17:09, 23 September 2009 (UTC)

It's nice to keep moving up -- but the climb keeps getting steeper! Looks like I need to update my test set as a few new problem bots have crept in (lost to Pytko?). The movement nets are indeed great: they add learned avoidance with a touch of unpredictability. I still can't put in Gaff's latest gun though, as performance always drops a bit, possibly due to skipped turns. --Darkcanuck 04:43, 24 September 2009 (UTC)


I just got @roborumble back in action, and - damn, nice work! ;) I wonder if there's any chance this means Diamond gets better than 42% against her now? So awesome that you and GresSuffurd are right on the heels of Ascendant now. That bot is such a milestone in my mind. --Voidious 03:59, 18 August 2011 (UTC)

Thanks! Ironically I've been gunning for a higher PL score, but the increased APS (and rank) is nice too.

I have a problem with Pris 0.92 or awl.Locutus 1.0 on my rr client; here's the log:

Fighting battle 9 ... darkcanuck.Pris 0.92,awl.Locutus 1.0
Robot darkcanuck.Pris 0.92 is not stopping.  Forcing a stop.
Robot darkcanuck.Pris 0.92 is not stopping.  Forcing a stop.
Robot awl.Locutus 1.0 is not stopping.  Forcing a stop.
Robot darkcanuck.Pris 0.92 is not stopping.  Forcing a stop.
Robot darkcanuck.Pris 0.92 is not stopping.  Forcing a stop.
Robot darkcanuck.Pris 0.92 is not stopping.  Forcing a stop.
Robot awl.Locutus 1.0 is not stopping.  Forcing a stop.
Robot darkcanuck.Pris 0.92 is not stopping.  Forcing a stop.
Robot darkcanuck.Pris 0.92 is not stopping.  Forcing a stop.

Please check, maybe it's a problem in Pris. I can send Pris's data files if that can help --Jdev 10:26, 19 August 2011 (UTC)

Strange... Pris doesn't save anything (well, it shouldn't) so that won't help. Was it just the one battle or have you seen this happen other times too? --Darkcanuck 13:39, 19 August 2011 (UTC)
There are 3 files in Pris's data dir: basedata.ser, battledata.ser and one other. But maybe it's data from old versions of Pris. I have seen only one battle with this log so far. I don't know whether it has an impact, but when this problem occurs another process on my machine takes all the CPU time --Jdev 14:15, 19 August 2011 (UTC)
Er... you shouldn't be running the rumble on a machine where some other process could start taking all CPU time. That could be hurting the results of any robot --Rednaxela 15:04, 19 August 2011 (UTC)
No process should start taking all the CPU time. But sometimes it happens with buggy software :) --Jdev 15:54, 19 August 2011 (UTC)
I would not run the RoboRumble client unless you have at least one CPU core free for it the whole time. The CPU constant used by Robocode assumes it has sole access to the CPU, or at least the same conditions as when the constant was calculated; any other setup would be inconsistent and make bots skip turns (or, in the extreme case, be stopped) unfairly. --Voidious 16:10, 19 August 2011 (UTC)



Dodging Performance Anomaly?

I recently discovered Robocode, and I made a relatively simple bot using DookiCape movement and a simple but somewhat unique segmented GF gun (distance, lateral velocity, advancing velocity). I don't know if you are still interested in improving your robot, but I noticed that partway through a 1000-round battle of my bot vs Pris, my hit rate went much higher than it should have, all the way up to 20%. I don't know why my targeting is working so well; I don't even decay old data right now. You may want to look into Pris's surfing for bugs, etc. P.S. My bot won the 1000-round battle with a 78% score share. By comparison, my bot scores 35% vs Shadow in a 1000-round battle.

Straw (talk) 00:52, 16 November 2013

Unfortunately, I don't think Darkcanuck comes around here any more. That is interesting though. I wonder if something about the neural net gets corrupted? I remember that TheBrainPi, which saves its neural net to disk between matches, had a bug that was solved by deleting its neural net data file (so it could start fresh, I guess).

It's also worth noting that RoboRumble matches are 35 rounds, so that's what many of us use in most or all of our testing. I bet a lot of top bots have issues in 1000 round battles.

And welcome to the wiki. ;)

Voidious (talk) 01:02, 16 November 2013

Thanks for the amazingly fast reply, and for the movement system. I've only been working on Robocode for about a month, and I started on targeting first. Another interesting thing is that DrussGT only scores 73% with a 17% hit rate against Pris, worse than mine, yet it totally trounces my bot in a direct battle. Has anybody thought about using Random Forests for movement or targeting? They use an ensemble of decision trees for classification. It's slow to generate a forest, but running data through one is pretty fast. I could imagine a robot which only retrained a few of the trees in its forest every tick. Seems somewhat similar to what people are doing with k-nearest-neighbor classification.

Straw (talk) 01:37, 16 November 2013

I've looked at random forests before; another one which interested me was Extreme Learning Machines, which are feed-forward NNs working in an ensemble. The trouble I found was that even though these methods are fast compared to other machine learning techniques (k-means, feedback NNs, SVMs), they are still much slower than a single KNN call in a Kd-Tree, just because of the amount of data they need to touch for each 'event'. A Kd-Tree trains incrementally in O(log N) and classifies in O(log N), with N being the number of items in the tree. I think the only thing faster would be a Naive Bayes classifier.
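As a hedged illustration of that last point (a sketch of the general technique, not any bot's source; all names here are invented): a counts-based Naive Bayes over discretized attributes trains in O(F) per sample for F features, which is about the cheapest incremental update possible:

```java
// Incrementally trained Naive Bayes over discretized features, predicting
// one of numBins firing bins. Training one sample only bumps F counters;
// prediction is an argmax over bins of log prior + sum of log likelihoods,
// with Laplace (add-one) smoothing so unseen values don't zero out a bin.
class NaiveBayesBins {
    private final int numBins, numFeatures, numValues;
    private final int[][][] counts;   // counts[bin][feature][value]
    private final int[] binCounts;
    private int total = 0;

    NaiveBayesBins(int numBins, int numFeatures, int numValues) {
        this.numBins = numBins;
        this.numFeatures = numFeatures;
        this.numValues = numValues;
        counts = new int[numBins][numFeatures][numValues];
        binCounts = new int[numBins];
    }

    // O(numFeatures) per observed (features -> bin) sample.
    void train(int[] features, int bin) {
        for (int f = 0; f < numFeatures; f++) counts[bin][f][features[f]]++;
        binCounts[bin]++;
        total++;
    }

    int predict(int[] features) {
        int best = 0;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (int bin = 0; bin < numBins; bin++) {
            double score = Math.log((binCounts[bin] + 1.0) / (total + numBins));
            for (int f = 0; f < numFeatures; f++) {
                score += Math.log((counts[bin][f][features[f]] + 1.0)
                                  / (binCounts[bin] + numValues));
            }
            if (score > bestScore) { bestScore = score; best = bin; }
        }
        return best;
    }
}
```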

Feel free to prove me wrong though =) I'd love something which works well beyond the ubiquitous Kd-Tree!

Another thing to consider is how you are going to pose the question. A lot of the successful NN-based approaches have used a bunch of classifiers, one for each potential firing angle, and shooting at the one with the highest probability. Others have tried posing it as a straight regression problem, but I don't think those worked as well, possibly because of the high noise (against top bots you are lucky to get a 10% hitrate).
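A minimal sketch of that argmax-over-angles scheme (hypothetical names, not from any particular bot): treat each firing bin as one classifier output, pick the highest-probability bin, and map it to an angle offset via standard guess factors scaled by the maximum escape angle:

```java
// Given one predicted hit probability per firing bin, fire at the bin with
// the highest probability. Bin index maps linearly to a guess factor in
// [-1, 1], which scales the maximum escape angle into an angle offset.
class BinGun {
    static double bestFiringOffset(double[] hitProbability, double maxEscapeAngle) {
        int bestBin = 0;
        for (int i = 1; i < hitProbability.length; i++) {
            if (hitProbability[i] > hitProbability[bestBin]) bestBin = i;
        }
        int bins = hitProbability.length;
        double gf = 2.0 * bestBin / (bins - 1) - 1.0;  // bin -> guess factor
        return gf * maxEscapeAngle;                     // guess factor -> radians
    }
}
```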

I'd be interested to hear what you end up trying, and how it works out.

Skilgannon (talk) 10:06, 16 November 2013

Another neural net thing I haven't seen in any bots is recurrent neural nets with memory. I've heard they are very good at certain types of problems where they need to recall old information while still learning new stuff. I don't know exactly how to implement the back-propagation algorithm on them, and I am no NN expert, but it seems as if they might be good against adaptive movers.

I have heard that SVMs are significantly slower than RFs but work better on smaller data sets. Since RF can handle both categorical and numerical outputs and predictors, you could pose the problem as asking for a GF, or for a bin to fire in. I'm not sure if you can get multiple outputs out of it. Another nice thing is that you wouldn't have to weight all your different predictors, because RF figures out which ones are important for you. I plan to test RF outside of a bot, on data gathered from battles, soon. Even if it doesn't work well or is too slow, it could still determine good weights for predictors used in a KNN algorithm.

With all these classification algorithms, if speed is a big issue, why not make a system that spreads the calculations over multiple ticks? It seems like you don't need to train every tick in general.
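One hedged way to sketch that idea (a hypothetical helper, not from any existing bot): queue training samples as they arrive, and each tick train on at most a fixed budget of them, so an expensive learner's cost is amortized across turns:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Consumer;

// Amortizes classifier training across ticks: samples go into a queue,
// and each tick at most perTickBudget of them are handed to the trainer.
class AmortizedTrainer<T> {
    private final Deque<T> pending = new ArrayDeque<>();
    private final Consumer<T> trainOne;
    private final int perTickBudget;

    AmortizedTrainer(Consumer<T> trainOne, int perTickBudget) {
        this.trainOne = trainOne;
        this.perTickBudget = perTickBudget;
    }

    void enqueue(T sample) { pending.addLast(sample); }

    // Call once per tick; trains at most perTickBudget queued samples.
    void onTick() {
        for (int i = 0; i < perTickBudget && !pending.isEmpty(); i++) {
            trainOne.accept(pending.pollFirst());
        }
    }

    int backlog() { return pending.size(); }
}
```

The trade-off is staleness: with a small budget the classifier lags the battle by however many samples are backlogged.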

Straw (talk) 21:17, 16 November 2013

Recurrent NNs use a lot of memory and processing power, both of which are fairly limited in the Robocode setting. Speed is definitely the main issue, particularly when a lot of the time is already taken doing predictions to give more relevant features for classification. Even with spreading calculations over multiple ticks, many popular techniques Just Wouldn't Work.

If you can't get multiple outputs out of the RF, just run a bunch of them, one for each bin, and choose the bin with the highest probability. I.e., each bin is a different class and you choose the most probable class: quick and dirty regression without inter-dependency. I've actually thought about trying a Naive Bayes like this, just for kicks. I think Pris and a few others do their NN classifications this way.

Skilgannon (talk) 21:27, 16 November 2013

I've looked at random forests before, but only briefly, and only because I saw on Wikipedia that they are like the state of the art in machine learning classification algorithms. :-P The other classification system I've always wanted to tinker with was Support Vector Machines, which I learned about in school and seemed really cool/versatile.

My main efforts to top KNN have been clustering algorithms, mainly a dynamic-ish k-means and one based on "quality threshold" clustering. I managed to get hybrid systems (meaning they use KNN until they have some clusters) on par with my KNN targeting, but getting the same accuracy at a fraction of the speed wasn't useful.

KNN really is fast as heck and just seems perfectly suited to the Robocode scenario. But Pris is pretty damn impressive with the neural nets and I'm sure someone could do better.

Voidious (talk) 19:32, 16 November 2013

Even in a 35-round battle, my robot still barely wins at 51%, which is far higher than Pris's score against comparable bots in the rumble. This is also with no preloaded GF data; it blindly fires HOT at first.

Straw (talk) 01:44, 16 November 2013