
Initial Discussion

Fire away...

Just a suggestion for an additional check: I have never seen a bot score more than 8000 points, so this could be checked too. When examining the results that messed up the original roborumble rating beyond repair, I saw results of 20000 against 16000 (that's what you get when running OneOnOne with MELEE=YES). For the time being I'll leave my client running (unattended) for ABC's server, as I don't really have the time for bughunting. Your effort however seems promising. Good luck. -- GrubbmGait

  • Thanks! That's a good check, will be combining that with the survival >=35 (also your suggestion I think) once I rearrange the error handling and failure output to the client. Then I'll look into ELO... --Darkcanuck
  • Your checks have both been implemented. -- Darkcanuck

Looking very nice! I have a couple of questions and thoughts I figured I'd mention. What does this "Ideal" column in the results mean? One thought I had about ratings: perhaps it would be best to have the APS fill missing pairings with Glicko-based estimates? I'm thinking that would give the best long-term stability/accuracy once pairings are complete, while giving something more meaningful before the pairings are complete. --Rednaxela 01:18, 26 September 2008 (UTC)

Thanks! I've just posted a bit more about ratings here. The "Ideal" column is my attempt to reverse-calculate a rating based on a bot's APS. I just inverted the Glicko formula for "E" (expected probability) to yield a rating given E (i.e. APS) and a competitor's rating and RD. For the latter two I used the defaults (1500 and 350), so theoretically if the APS represents the score vs an average bot (and there's a uniform distribution?) then the rating might converge to the "ideal" value. But I have no idea if it works, I just wanted to see how close it might be. I'm not sure you could fill in the pairings using Glicko + APS -- the reason systems like Glicko exist is to get around the problem of incomplete pairings, so the Glicko rating should be enough in itself. If it's accurate, that is -- we'll see once the ratings catch up to the pairings already submitted... -- Darkcanuck 03:39, 26 September 2008 (UTC)
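The reverse calculation described here can be sketched in a few lines. This is a Python sketch, not the server's actual code; the function name and defaults are illustrative, using the standard Glicko expected-score formula E = 1/(1 + 10^(-g(RD)(r - r_j)/400)) solved for r:

```python
import math

def ideal_rating(aps, opp_rating=1500.0, opp_rd=350.0):
    """Invert the Glicko expected-score formula: given a bot's APS
    (as a fraction, 0..1) against a hypothetical 'average' opponent
    with the default rating and RD, return the rating that would
    produce that expected score."""
    q = math.log(10) / 400.0
    g = 1.0 / math.sqrt(1.0 + 3.0 * (q * opp_rd) ** 2 / math.pi ** 2)
    # E = 1 / (1 + 10^(-g * (r - opp_rating) / 400)), solved for r:
    return opp_rating - (400.0 / g) * math.log10(1.0 / aps - 1.0)
```

An APS of exactly 50% maps back to the default rating of 1500, with higher scores mapping to higher ratings on the Glicko scale.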

Ahh, I see. Thanks for the explanation. If the Glicko rating doesn't converge very, very close to the "Ideal", then I'd say it alone might not be the best fit for Robocode, given that complete pairings are not hard to get. The reason I suggest using APS and filling missing pairings with Glicko-based percent estimates is that my proposed method is guaranteed to converge to the exact APS ranking order when pairings are complete, and would quite surely be at least slightly better than APS when pairings are not complete. Perhaps I'm more picky than most, but I'd consider a hybrid necessary if Glicko doesn't in practice converge to "Ideal" with enough accuracy to preserve the exact APS rankings (which I think is very plainly the most fair ranking when there are complete pairings). I suppose we'll see how accurately Glicko converges :) --Rednaxela 04:25, 26 September 2008 (UTC)

Be careful about the "ideal" convergence concept! Keep in mind that I made this value up and it doesn't really have a statistical basis of any sort. I was just curious what a naive reversal with a single data point might produce, in order to get an idea of what neighbourhood DrussGT's rating might be in, for example. I also wanted to get a sense of whether I had programmed the formulas correctly. I wonder, though, if we're abusing these rating systems by using %score instead of absolute win/lose values (1/0)? Would the Glicko rating converge more rapidly to match the APS scale if I had chosen win/loss? I'm very curious, but not so much as to interrupt the current rebuild, which may take longer than I thought. -- Darkcanuck 04:54, 26 September 2008 (UTC)
Well, I'm not talking about the convergence to that "Ideal" column. I'm talking about convergence of the relative rankings as opposed to specific rating numbers. If the rankings don't converge to exactly the same order as APS, then I think there's issue enough to justify a hybrid that uses APS, with ELO or Glicko to estimate missing pairings. --Rednaxela 05:10, 26 September 2008 (UTC)
Gotcha. I suppose you could keep track of the rating (Elo or Glicko) and just use it to calculate expected scores for missing pairings. Then generate an estimated APS for full pairings. We'll have to see how well the ratings stabilize. I'm thinking I should have used Glicko-2 instead, since it includes a volatility rating to account for erratic (read problem bot) performance. -- Darkcanuck 06:22, 26 September 2008 (UTC)
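The hybrid being discussed (real pairing scores where they exist, rating-based expected scores where they don't, averaged into an estimated APS) could be sketched like this; the function and parameter names are hypothetical:

```python
def estimated_aps(bot, opponents, pair_scores, expected_score):
    """Average percentage score over all opponents, substituting a
    rating-based estimate (e.g. Glicko expected probability * 100)
    for any pairing that hasn't been fought yet.

    pair_scores: dict of opponent -> actual %score for known pairings.
    expected_score(bot, opp): callable giving the estimate; hypothetical.
    """
    total = 0.0
    for opp in opponents:
        if opp in pair_scores:
            total += pair_scores[opp]          # real result available
        else:
            total += expected_score(bot, opp)  # fill the gap with an estimate
    return total / len(opponents)
```

Once all pairings are fought, the estimates drop out entirely and this reduces to plain APS, which is the convergence property Rednaxela is after.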

Started sending the results to your server, as long as you relay them to ABC's server. What is the delay btw? --GrubbmGait 10:08, 26 September 2008 (UTC)

Thanks for joining in! I have no plans to stop relaying results and have been doing so for almost a week now. If by "delay" you mean occasional slow connections, it's due to the scoring update and I've posted it on the known issues page. I have this process cranked up at the moment while I try to get the ratings to catch up, but it will get faster soon. :) -- Darkcanuck 15:25, 26 September 2008 (UTC)

Great job with this server! You can always get the ranking/battles_* files from my server and submit them all into yours. I'm also experimenting with MySQL atm. My SQL skills are a little rusty but it's all coming back pretty fast :).

I also have a few doubts about the new rating method. The first one is: why? From what I understand, Glicko is an ELO extension for rankings where the match frequency is not uniform between participants, which is not the rumble's case. As an experiment it's very cool, but for me the "old" ELO method is time-tested and proven to work great, and should be the default sorting method for the ranking table. --ABC 11:23, 26 September 2008 (UTC)

I also have some doubts about whether Glicko will actually give better or much different results than ELO; however, I'm not sure ELO is really the best default ranking system when full pairings are so easy to get. I suppose we'll see once your server gets to full pairings, but I strongly suspect there will be some ranking deviations from the APS ranking, which I think is hard to argue is in any way biased. --Rednaxela 13:26, 26 September 2008 (UTC)
I have doubts as well, but I wouldn't have known until I tried it. My major objection against Elo is the lack of a clear, published implementation. It was easier to implement Glicko than to sort through the RR server code. If someone can clarify this for me, sure I'll try it out. Why not? -- Darkcanuck 15:25, 26 September 2008 (UTC)


I just want to leave a note saying you're awesome. :) It's really nice having someone put effort into improving the rumble itself. Good work! --Simonton 03:27, 11 October 2008 (UTC)

Oh, and FNL, if you're reading this, that goes double for you :). --Simonton 03:30, 11 October 2008 (UTC)


Do you think you or I could restyle the page? Some basic CSS could go a long way to making the page look more modern and less of an eyesore. An example of my work is here; it wouldn't look like my page there, but it will be clean (and it will validate). Currently it's not even set up as a proper web page, which means all browsers will render it in quirks mode, a very slow and CPU-intensive rendering mode.

In fact there is a lot you can do to both reduce HTML elements and increase rendering speed, such as changing the <td><b> combo into just <th> tags, because that's what they are for: <td> = table data, <th> = table header. With some CSS you can adjust their alignment. It wouldn't require much CSS; indeed, a lot of CSS is actually undesirable in a simple page such as this, but CSS is preferred over tags because it is actually faster in most cases (very old or poorly designed browsers being the exception).

Chase-san 08:02, 14 October 2008 (UTC)

  • I very strongly agree! It's on the roadmap, but I've focused on the data side first. The current "pages" were based on a view-source from the old server. A little css and valid xhtml would go a long way. I also want to switch to a template system (maybe Smarty or Zend?) for easier reading and better reuse -- having html mixed in with php makes for some very ugly code. If you want to style some static content and send it to me, that would be great! --Darkcanuck 15:04, 14 October 2008 (UTC)
xhtml is very nice, and while most browsers support true xhtml (except IE and Konqueror), the ones that do not control a large enough majority that it would have to be served as text/html anyway. This mitigates the real purpose of an xhtml page, but it's nice to have the framework in place for when they catch up (all the work that would have to be done is switching the content type). I think using Smarty or Zend is overkill unless you plan on extending the system further, and I'd only suggest them if you plan on doing something like that. They are template engines, meaning you would have to make templates for them, which just adds a lot of extra overhead to something simple like this. Remember KIS: keep it simple. --Chase-san 21:36, 14 October 2008 (UTC)
If you really want a really nice, super-quick, super-simple "template engine", I suggest you consider this. Instead of bothering with special "template" languages, you write your templates just in PHP, and all the "template engine" does is set up variables, making really clean shorthand like <?=$title;?> all that's needed to put some variable in the template. I once tried it when hacking around and found it to be a really nice KIS approach to "template engines". Also, the author put the code there in the public domain, so there are no issues using it here as we see fit. --Rednaxela 03:47, 15 October 2008 (UTC)
I at one point designed my own KIS template system. It was similar to others except that the content to replace was in {}, for example {title}, and for other parts I did things like <table>{row_start}<tr><td>{row_num}</td><td>{row_data}</td></tr>{row_end}</table>. All this was kept in a separate file and required parsing, but otherwise it was fairly simple: it only used half a dozen commands and you used the functions to fill in the data. I will see if I can locate it, or remake it, if you like the sound of using a template but still want to keep it very simple. --Chase-san 22:41, 15 October 2008 (UTC)
Thanks for the suggestions guys, but I'm sticking to my original plan (Smarty). If the template engine ever becomes the bottleneck, then I'll look into something custom. --Darkcanuck 02:13, 16 October 2008 (UTC)
Okay, cool. I would like to work on a template for the actual score page then. I am great at CSS and at making it cross-compatible with other browsers (namely IE, Firefox, and Safari; I use Opera, so obviously it will work for that too). Unlike making robots, web pages are not very time consuming. Do you have any kind of messenger we could talk on? (I have, or can get, any of them.) --Chase-san 04:08, 16 October 2008 (UTC)
Excellent! I don't use messaging much, and I'm travelling at the moment, so email is better: jerome-at-darkcanuck-net --Darkcanuck 23:43, 16 October 2008 (UTC)

Team Rankings

Is it an idea to get the battles for teams from Pulsar's server? I think they have no weird results, and your ranking will at least have a team ranking then. --GrubbmGait 17:57, 24 October 2008 (UTC)

Good idea! I'll grab the battle file, but need to figure out how to exclude older team versions to keep the server load down. --Darkcanuck 01:58, 25 October 2008 (UTC)

Table Sorting

Very nice things lately! I do have a couple of little gripes though. One thing is that I think it would be more natural if the first click did 'highest-first', unlike how TableSorter seems to operate by default. Secondly... ugh... it's so damn slow to sort. Even on my fairly modern system there's a very ugly delay when sorting the table (a 20-year-old machine could probably sort the data faster with static code... not everyone uses Google Chrome or an experimental FF build), and I imagine this would become a very annoying delay on anything older. Not only is the JS sorting slower than server-side, but there's no indication of it processing/loading, which irks me a little. Perhaps if the JS sorting stays, there should be a little line or two of code to make a 'loading...' indicator of some sort? In any case, great work lately! --Rednaxela 21:39, 26 October 2008 (UTC)

The problem with javascript when sorting big tables is not the sorting in itself but the big number of DOM document changes when you generate the resulting table HTML. I'm currently developing a small javascript application at work that sorts a table of around 500 entries pretty much instantaneously. It only shows the top 5 entries as a table (similar to DC targeting, curiously :)); if I generate the 500 rows it becomes very slow. --ABC 23:14, 26 October 2008 (UTC)

After some reading, I found that apparently TableSorter's slowest part is how it READS the data from the DOM every time you sort. Perhaps a more efficient method would be to send the data in both HTML form and JSON form, and let the script change the order of the rows in the DOM based on the data efficiently parsed from the JSON and stored in JS memory. I think that model would have the fewest DOM operations and thus be the most efficient way to do client-side sorting. On a related but diverging note... once at that point, it might not be that much more work to do 'live' score updates... (which would also reduce bandwidth use in the face of mad-refreshers). I may be tempted to try and code such a fancy efficient-sorting, live-updating score view some time... --Rednaxela 00:19, 27 October 2008 (UTC)

Well, it's faster than re-requesting the page, which the old sort did. :) But if you find a way to speed it up, I'm all ears -- javascript is pretty new to me. I don't like the default sort order either, but there didn't seem to be an option to start with a descending sort. The Glicko columns are also a little weird due to the RD value in brackets. I'm not sure I follow the bit about "live updates" though, the current pages are as live as you can get. Scores are updated every time a new result is uploaded. --Darkcanuck 05:47, 27 October 2008 (UTC)

Actually, I'm finding it very distinctly slower than re-requesting the whole page (of course my campus internet here is pretty damn fast). Well, I think it could certainly be sped up by methods like I said above, sending the data in JSON form and keeping it in JS memory, though that would likely involve using our own code instead of TableSorter (or mangling TableSorter considerably beyond recognition). And what I mean by "live updates" would be using "AJAX" stuff to ask the server every minute or so whether there have been any more recent updates; the server sends any in JSON form and the results page gets updated without refreshing. --Rednaxela 12:35, 27 October 2008 (UTC)


Just another small idea: can you distinguish the contributors per month? Long ago, late 2004 I think, we had a sort of ranking of contributors when rebuilding the rankings after a server crash. (Heh, sounds familiar...) This way 'new' contributors see their names without the need to scroll way down. Also: every ranking is a competition ;) --GrubbmGait 19:00, 28 October 2008 (UTC)

Are you saying you don't like my score of 410,000+? ;) (melee is the key to high numbers, btw) Good idea to split the numbers out in more detail. I guess I could add some more columns to the users table to make some rolling counts. What interval would be best: once per day to keep a 30-day window, or start fresh every month to make a new competition? --Darkcanuck 05:40, 29 October 2008 (UTC)
Well I personally think "once per day to keep a 30-day window" would be best for being a more meaningful and current reflection of things, but starting fresh each month would be best if we want to have something like a 'monthly rumble contributor award'. Of the tradeoff, I'm leaning to the former myself. Of course, if we really wanted we could just track both :) --Rednaxela 14:53, 29 October 2008 (UTC)

Ok, we now have current month and last-30-days upload rankings, split by game type. I've tried to scale the melee numbers to match the actual number of battles run (45 pairings uploaded per battle?). Hopefully someone will start to submit team battles (can't get my client to work). --Darkcanuck 23:51, 30 November 2008 (UTC)
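For reference, the melee scaling works out as follows: a standard meleerumble battle has 10 participants, so each battle produces one pairwise result per pair of bots, which matches the "45 pairings uploaded per battle" figure. A small sketch (illustrative names, not server code):

```python
from math import comb

MELEE_PARTICIPANTS = 10
# each full melee battle uploads one result per pair of bots
PAIRINGS_PER_BATTLE = comb(MELEE_PARTICIPANTS, 2)  # C(10, 2) = 45

def melee_battles_run(pairing_uploads):
    """Rough number of melee battles behind a given upload count,
    assuming every upload came from a full 10-bot battle."""
    return pairing_uploads / PAIRINGS_PER_BATTLE
```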

Participant list

Can you please create a mirror of the official participants list on your server (updated automatically)? That would be good for when the official page is off-line, like now ^-^ --Lestofante 22:05, 1 Dec 2008 (UTC)

Try this: . I just uploaded my copy to the server and added the 'pre' tags the rumble client is looking for. Once the old wiki comes back I'll try mirroring all of the participant lists -- shouldn't be difficult, just a daily 'wget'... --Darkcanuck 03:35, 2 December 2008 (UTC)
Thanks, now my client works. Here's the modification: PARTICIPANTSURL= For the mirroring system, don't just use a wget; use a little script that checks the integrity of the list. --lestofante 09:37, 2 December 2008 (UTC)


One thought is, now that removing the Glicko-1 column has cleared up a little space... maybe those survival percents that are in the details pages could be included? I think it would be nice to be able to easily see what bots are strong survivalists ;-) --Rednaxela 23:42, 2 December 2008 (UTC)

Too easy! :) --Darkcanuck 04:53, 3 December 2008 (UTC)
Nice. Now just to wait for all the bots to return so I can see how good 'RougeDC survival' really ranks in that... :) --Rednaxela 05:05, 3 December 2008 (UTC)
It could be a long wait -- reactivation is just as slow as removal. But at least clients won't be fighting over the two. --Darkcanuck 05:13, 3 December 2008 (UTC)
Aye, but at least based on the rate at which my client is currently uploading bots of which some need to be reactivated, I think there's a good chance it may be back to normal in less than 12 hours from now. --Rednaxela 05:29, 3 December 2008 (UTC)

"Suspicious Battle List"

One thought I had is that bad 0 scores could be filtered by taking a look at the expected score, and discarding 0 results where they seem unreasonable. Of course, an alternative to automatic rejection would be making a "suspicious battle list" page that could be watched for manually initiating removals. I would imagine it would take no more than a single SQL statement of moderate complexity to list suspicious uploads. --Rednaxela 06:25, 29 December 2008 (UTC)

Neat idea. Although bots which throw the occasional exception may get a lower than expected score once in a while. Rather than run a query against the battles table, the server could flag battles as they're submitted if the score deviates too far from the expected value. What do you think a good range would be, considering some bots have very high PBI's? --Darkcanuck 06:48, 29 December 2008 (UTC)

Well I think running a query against the battles table is necessary due to the number of bad results that are already in the server, which I'd consider quite important to fix and manually searching for all of them would be time intensive. As far as what kind of deviation? Well because of such high PBI cases I'd say something roughly like the following would be good. Flag them if: 1) The battle deviates from the Glicko-2 expected result by more than 50, or 2) The battle deviates from any results submitted by *other* clients by more than 30, or 3) The score is exactly 0 when the expected score is anything greater than 20
Of course, I strongly believe we can't get a really good idea of exactly what thresholds are right until we do some queries on the battles database to determine what level of sensitivity is most correct. --Rednaxela 07:07, 29 December 2008 (UTC)
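The three proposed rules could be prototyped as a simple server-side check. This is a Python sketch with the thresholds suggested above (which, as noted, would need tuning against the real battles table); the simple rule-2 check here does not yet require multiple distinct clients to agree:

```python
def is_suspicious(score, expected, other_client_scores):
    """Return True if a battle result looks suspect, per the three
    proposed rules.  All scores are percentages (0-100);
    other_client_scores holds results for the same pairing uploaded
    by other clients."""
    if abs(score - expected) > 50:       # 1) far from the Glicko-2 expectation
        return True
    if any(abs(score - s) > 30 for s in other_client_scores):
        return True                      # 2) disagrees with another client's result
    if score == 0 and expected > 20:     # 3) zero score where a real one was expected
        return True
    return False
```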

Rednaxela has a good suggestion; I second the motion. I've been seeing the occasional outlier where one or the other bot in a 1v1 match gets 0% bullet score over 35 matches, then in a second battle from the same uploader, it gets a more reasonable score. Results like those would be easily detected with the mechanism proposed above. -- Synapse 00:39, 20 June 2009 (UTC)

Please make a note of these as you see them so we can take a closer look. --Darkcanuck 02:55, 20 June 2009 (UTC)

apv.TheBrainPi 0.5 vs synapse.Geomancy 1 -- Synapse 05:46, 20 June 2009 (UTC)
synapse.Geomancy 1 vs elvbot.ElverionBot 0.3 -- Synapse 05:46, 20 June 2009 (UTC)

See Talk:Geomancy for a continuation of this discussion.

I like this idea, but for 2) it might be a problem if the bad battle is submitted first, and it causes all subsequent battles from other clients to be ignored. Instead a decision should be made, if there is a deviation between battles of more than 30%, which battle is the bad one. --Skilgannon 01:52, 20 June 2009 (UTC)

Well, about #2, I said "*other* clients" in plural. What I meant, was that it would only trigger the #2 check if there were multiple distinct clients that gave a result rather different than an outlier. --Rednaxela 02:19, 20 June 2009 (UTC)

Eventually I'd like a system where users can flag any battle to add it to a suspect battles list. Then we can figure out what to do with them. Although this may be quite difficult given randomness, bots which throw the occasional exception (eg. DogManSPE) and the fact that every large real-world system has outliers.

Well yes, this is why I say "suspect battles list" as opposed to "automatically remove". As far as cases like DogManSPE, well, we could check a bot's tendency to have "outlier" battles overall, and use that as a "baseline". If that baseline is exceeded by a certain amount by either a particular client or robocode version, then it could be considered suspect, in a way that considers cases like DogManSPE. --Rednaxela 05:16, 20 June 2009 (UTC)

Source code

Can I have your server source, please? I've written PHP and MySQL for over 3 years now and I've planned to create a new Thai RoboRumble for my country! Hope you'll give it to me. You can email me at the email found on my user page. » Nat | Talk » 09:20, 10 February 2009 (UTC)

Rumble ideas

Hi! I'm very thankful to you for doing the new engine. I was thinking about brand new femto and haiku rumbles. What do you think about it? In my opinion it would be fantastic if there were these kinds of categories. They seem to be really cool and exciting, but unfortunately there isn't any ranking or challenge for them. Femto can't be hard to implement; maybe haiku is a harder task. I imagined new categories with new participant lists for them, though I can imagine the existing bots having this kind of rank too, but then it would lose its importance. --HUNRobar 17:55, 14 February 2009 (UTC)

For femto battles, we would really need to modify the RR@H client. For haiku, I think not; it requires a human to check how many lines there are. But wait a minute, I'm now creating my new rumble server which supports many old rumble ideas, plus these rumbles! That's why I want the source code above. If you want to test your haiku bot or femto bot, you can see the old rankings and bots in the robocode little league by Kawigi. » Nat | Talk » 01:48, 15 February 2009 (UTC)

1 request and 1 question, both relatively simple:

  • On the Rating Details page, could it show the bot's ranking also? Currently it is only shown on the list of bots page.
  • How should the LRP graph be interpreted? (I know it says PBI on the y axis and ranking on the x, but what does the slope of the line mean?)

--Starrynte 00:40, 13 September 2009 (UTC)

About your question, that line on the graph is the 'mean value' line. I prefer to look at the LRP graph in the form PBI vs. expected score, instead of PBI vs. ranking, but the idea is still the same. Basically, if that line is lower on the side of stronger bots (ie. lower expected score, or higher ranking) then it means you are weaker than expected against stronger bots, and vice versa. --Skilgannon 07:06, 13 September 2009 (UTC)

Valid versions

By the way Darkcanuck, just to let you know:

  • I'm quite sure (NOT plain 1.6.1) is at least as rumble-stable as 1.6.0 is, and is better because it fixed how ITERATE was broken.
  • Also, I'm pretty sure EVERY single version from 1.6.2 to 1.7.1 Beta 2 has been bad for the rumble.
  • 1.7.1 Final looks like it's probably good for the rumble, except for:

--Rednaxela 07:34, 3 April 2009 (UTC)

  • Agreed. I really like 1.7.1, even the alpha versions, compared to, which has a ton of bugs =D I'm figuring out what's behind SandboxDT, and I'm sure Fnl is fixing another bug, so expect a better rumble :-) » Nat | Talk » 16:07, 3 April 2009 (UTC)

Ok, I can add to the list -- but it won't matter much since that client won't report its version number either. Nice summary though. (And you know I filed that 1.7.1 melee bug, right?) Anyone interested in patching the 1.5.4/1.6.0/ rumble jar(s) with the version check from 1.6.2? --Darkcanuck 15:40, 3 April 2009 (UTC)

  • Either way, users won't download a patched version. I'll try to do it. Just getting my head spinning around checking out from the robocode SVN :-) » Nat | Talk » 16:07, 3 April 2009 (UTC)

I see you patched your client already. Just a few suggestions: you can (mostly) detect the version from the user suffix. Most users suffix their name with the version (except Deewiant), so you can check with that. » Nat | Talk » 19:11, 5 April 2009 (UTC)

Yes, but I like guarantees that the right version is being used. :) I've patched both 1.5.4 and to report the client version and I'll post the new jars later today. Once rumble users have switched, then I can turn off the workaround for older clients. --Darkcanuck 22:08, 5 April 2009 (UTC)

I've been using 1.6.0 for the rumble; from what I understand I should install (actually the version I develop with) and replace with the patched jar? Has it been tested, even a little bit, to ensure no side effects, or should I set uploads to NOT for a while? --zyx 03:52, 6 April 2009 (UTC)

I've tested both jars on my system and they seem to be fine. If you want to stick with 1.6.0 I can patch it tomorrow -- I got lazy and only did "ol' reliable" (1.5.4) and the latest stable version. Right now I'm using myself, although I can't get that one to work on my Mac. --Darkcanuck 04:20, 6 April 2009 (UTC)

I tested a fair bit and for a period of time it was what I was using for rumble. And also, like I note above, that version fixes the ITERATE option which has been broken for a long time (it still ran with ITERATE=YES in older versions, but it didn't choose the best bots properly after the first iteration). --Rednaxela 04:17, 6 April 2009 (UTC)

No no, I don't want to stick with 1.6.0. I used as my first rumble client, then read that the official versions were 1.5.4 and 1.6.0, so I downgraded to it; actually is what I'd like to use. When I saw Rednaxela's post above I had already decided to switch. I don't use ITERATE, but I still prefer the newest stable version, and since is the version I develop in, even more so. My question was related to the patched jars; sometimes one change affects more than one would like it to, so I asked if you had tested it, relatively enough :-p. I will run the patched jar later tonight, probably with UPLOADS set to NOT just in case, and tomorrow let it upload if ok, or report any weird behavior if I see one. Good job anyways. --zyx 05:20, 6 April 2009 (UTC)

FYI, SVN revision r2352 is the update where it was added. (I think you knew already, Darkcanuck.) Actually, I saw only a few lines of changes :-) BTW, it's the engine for 1.6.2 (AKA the melee-bugs version), not the old engine; there were a lot of changes in 1.6.2. Shame on me: as I said above I'd create a patch, but I haven't even started yet. I don't think you need to patch, as everybody but Darkcanuck and GrubbmGait uses (at least after tonight). AFAIK there is no bot that can run on 1.5.4 but can't run on, or is there? If everybody uses, I shall release a bot with an underscore in its version again =D » Nat | Talk » 07:48, 6 April 2009 (UTC)

Zyx, why don't you use ITERATE? David Alves said somewhere that ITERATE is twice as fast as using a shell script. » Nat | Talk » 07:48, 6 April 2009 (UTC)

Probably because of that. I don't like my processor's temperature when ITERATE is on. I have a shell script that sleeps after every iteration, and it can be set to run a given number of roborumble iterations per meleerumble iteration. And I know that ITERATE is much faster, because the initializing version check takes quite some time. I have a modified version of RoboRumble that basically does the same thing but doesn't upload results (it stores them to files) and has a Thread.sleep(X), which I use to test new versions of my bots; that one is faster and I can still sleep after iterations. Although adding the sleep into the official version would be really simple, I would still be missing my roborumble/meleerumble relation. Also, Darkcanuck is a bit at fault: since the server is faster, it's harder for the processor to cool down :-S. --zyx 08:28, 6 April 2009 (UTC)

I went ahead and patched 1.6.0 anyway -- but this one I haven't tested. The other two have been tested for both one-on-one and melee, and I've been using for two days now. If you find the server too fast, I can increase the upload throttling :) (right now there's a one second delay between uploads) --Darkcanuck 15:14, 6 April 2009 (UTC)

Rednaxela, the bot issues are fixed, please verify. Unfortunately, new bugs were discovered. [1] :-( » Nat | Talk » 02:16, 8 April 2009 (UTC)

I think it may be awhile before a stable 1.7.x version is ready. There was never a stable 1.6.2 and 1.7 adds more systemic changes so there will be more bug hunting to come! I might add a "test" mode on the server so basic rumble checks can be done -- let me know if you have suggestions. --Darkcanuck 02:24, 8 April 2009 (UTC)
Maybe you don't know that 1.7.1 has nearly no bugs left. It inherited a lot of bugs from 1.7.0 that weren't reported on SF, and a lot of new bugs too, but I have hunted down more than 50 bugs already (since the alpha versions).
The "test" mode is a good idea, but will it overload your server? If your MySQL is fast enough, I suggest adding a `stable` field: stable results query WHERE `stable` = 1, and "test" results query all of them. Easy? » Nat | Talk » 03:41, 8 April 2009 (UTC)
Even when all the reported bugs are fixed, we will need to spend some time running it to make sure the results are valid. The "test" mode I was considering wouldn't actually store anything to the database, just do the basic data validation checks before throwing away the results. This way you could run a new client and monitor the results. There's already a status flag in the battle results table which could do what you suggested, but I don't know that we need to store the test results. A fairly simple improvement on this plan would be to calculate the difference between the real rumble results and those from the test client, then send this data back to the client. --Darkcanuck 04:18, 8 April 2009 (UTC)
The Team Rumble results are invalid right now; that's why those melee bots are going into the Team Rumble results. But I think the RoboRumble and MeleeRumble results are valid now. I'll test by setting UPLOAD=NOT in the latest 1.7, putting the results into and letting it upload. :-) » Nat | Talk » 04:30, 8 April 2009 (UTC)
Ergh! Sorry, please consider deleting all results from Nat_1711 :-( 1.6.2 and up use survival score, but older versions use place count. I'm very sorry. » Nat | Talk » 04:48, 8 April 2009 (UTC)
Maybe next time don't change your client's version number? The check is there for a reason... --Darkcanuck 06:27, 8 April 2009 (UTC)
I'm not playing games. All the results were uploaded under a new username (Nat_1711 vs. Nat_1614 or Nat). I think you can delete them with one SQL query, can't you?
But the results from it are very close to the original scores; I can't spot any difference except survival. I have a thousand 1.7.2 alpha results saved on my machine waiting to be uploaded :-) Just look at your code: using survival score doesn't matter in one-on-one/team since it automatically calculates the percent score.
I hope you plan for the newer version soon. This version works twice as fast on my machine, but loads with a pile of Java exceptions, too. » Nat | Talk » 06:49, 8 April 2009 (UTC)
I appreciate your taking the time to test 1.7.2, but please don't upload results from 1.7.2 using a client! If there are problems with the results, how can we separate them easily? Removing bad results from the rumble is more complicated than a single SQL query and unfortunately I haven't automated it yet: 1 - the bad result has to be flagged/deleted, 2 - pairing scores need to be recalculated, 3 - ELO/Glicko/APS rankings need to be updated at least once to smooth out the bad data (only APS can be recovered cleanly). When the open issues with 1.7.x are fixed and a new release comes out then we can look into allowing the new version. But for now, please stick to the official versions when uploading. There are over 5 million battles stored on the server, I don't want to search through all of them to find a handful of bad ones! ;) --Darkcanuck 07:15, 8 April 2009 (UTC)

I noticed the message about patching roborumble.jar, so I did, but then I get the following when uploading:

OK. Client version null is not supported by this server! Please use one of these: 1.5.4, 1.6.0,

I tried patched versions of both 1.5.4 and, but I got the same message each time. Beats me what's up with that; for now, I reverted back to the unpatched roborumble.jar (under, for what it's worth). --Deewiant 15:22, 9 April 2009 (UTC)

Sounds like the patched roborumble.jar is working but the game engine isn't returning a version number (just an empty string). Can you try a clean install (just copy over your bot jars and the files under roborumble/)? I think the engine pulls its version number from the versions.txt file, so if it's missing or has been updated then this could happen. --Darkcanuck 07:59, 10 April 2009 (UTC)
Sorry, I should have been more clear: that's exactly what I did for both 1.5.4 and when I first ran into the problem: I grabbed the installer from SourceForge, copied over robots and roborumble/*.txt, overwrote roborumble.jar with the patched one and ran And then I got the error again. --Deewiant 10:53, 10 April 2009 (UTC)
Thanks for the info! I think I just found the bug: in the pre-1.6.2 versions (which is where the patch comes from) there are separate methods for normal battles and melee battles. Looks like I only patched the normal one but missed melee. Expect a new set of patched versions shortly! --Darkcanuck 21:27, 10 April 2009 (UTC)
Just tested the new patch and this problem has been fixed for melee. 1.5.4 and 1.6.0 also have been fixed. You can download the new version using the same link, although you'll probably have to clear your browser cache to get the latest version. --Darkcanuck 22:27, 10 April 2009 (UTC)
Alright, 1.5.4 works for me now, cheers. --Deewiant 10:46, 11 April 2009 (UTC)

Darkcanuck, could you please take a look at the Robocode released this week? Please test it and report any bugs you find, or, in other words, decide whether or not it is stable enough for the RoboRumble. » Nat | Talk » 06:30, 13 April 2009 (UTC)

I'm away this week but when I get back I'm planning to work on the server a bit more. Once that's done I'll take a look at the new version. And thanks to everyone who's using the patched rumble client, I think we're almost ready to disable uploads from anonymous clients! --Darkcanuck 18:32, 15 April 2009 (UTC)

There is a bug, at least in the patched version. If you have some battle results stored and run the client with EXECUTE=NOT, you get this message and the results are thrown away.

OK. Client version null is not supported by this server! Please use one of these: 1.5.4, 1.6.0,

I guess it pulls the version number somewhere after battle execution starts, or something like that. I guess the jar should be fixed, but in any case I think the server should reply FAIL instead of OK so the results are kept on the client. --zyx 08:20, 16 April 2009 (UTC)

That's a quirk of how the version number is being pulled by Roborumble -- it's a bit odd, but I just copied how it was done in 1.7.1. Not sure how easy this is to fix but you can file it as a bug on sourceforge. On the server side, I always send an "OK" to invalid clients to prevent them from holding on to possibly bad results. For example, someone may run battles with an invalid version, see the error messages and then install a valid one on top -- if the old results are still there, they would later get uploaded with the correct version number and then corrupt the rankings... --Darkcanuck 18:55, 18 April 2009 (UTC)

It looks like all of the contributors in the last 30 days have been using version If there are no objections, I'd like to remove 1.5.4 and 1.6.0 as valid versions so that we're all on the same version. It may also prevent any issues with exceptions from methods not supported in 1.5.4 (although I know there are quite a few more since 1.6.2.*). --Darkcanuck 23:20, 16 July 2009 (UTC)

Sounds good to me. --Voidious 23:44, 16 July 2009 (UTC)
Me too. Just promise that when a 1.7.1.x release is ready, you will switch over. » Nat | Talk » 12:43, 17 July 2009 (UTC)
1.7.x will get added once everyone thinks it's ready. But I still have 2 bugs open on (affecting teams and bot property files) and then there's the whole movement issue. The community will decide when a new version is "good enough", not just me. --Darkcanuck 14:24, 17 July 2009 (UTC)
Yep, sounds good to me. And Nat, I'd say any such promise would need to be conditional upon 1.7.1.x having no rumble-affecting bugs. While it looks like it will be good for the movement that Voidious/Skilgannon/Positive are doing a great job collaborating on, some other issue might potentially show up and go undetected in the beta. Of course, that's what testing the beta is for, and I do intend to get back into the robocoding world in time to test it. --Rednaxela 12:54, 17 July 2009 (UTC)


If in melee both Bot A and Bot B had "0 survival", Bot A AND Bot B get "0% survival" against each other. Is it 0 because 0 survival = 0% survival against the other bot, regardless of what the other bot got? Or is it because of a 0 / 0 thing? --Starrynte 00:40, 4 April 2009 (UTC)

In a melee battle, if two bots have 0 survival then when the server tries to calculate the survival % for that pairing it becomes 0 / (0+0) for bot A, same for B. The divide by zero protection simply assigns 0 scores to both, although I suppose technically they should each get 50%. --Darkcanuck 05:08, 4 April 2009 (UTC)
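For illustration, here is a minimal Java sketch of the pairing calculation just described, including the divide-by-zero protection. This is not the actual server code (which is PHP); the class and method names are invented.

```java
// Sketch of the per-pairing melee survival % described above, including
// the divide-by-zero protection. Invented names, for illustration only.
public class SurvivalShare {
    // Bot A's survival share of the pairing, as a percentage.
    static double survivalPercent(int survivalA, int survivalB) {
        int total = survivalA + survivalB;
        if (total == 0) {
            // Divide-by-zero protection: both bots simply get 0%,
            // though arguably each should get 50% instead.
            return 0.0;
        }
        return 100.0 * survivalA / total;
    }

    public static void main(String[] args) {
        System.out.println(survivalPercent(35, 15)); // prints 70.0
        System.out.println(survivalPercent(0, 0));   // prints 0.0
    }
}
```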

Team Rumble

What the heck is going on in the Team Rumble? Where do those melee bots come from? » Nat | Talk » 20:26, 5 April 2009 (UTC)

Ugh, thanks for pointing this out! That's exactly the sort of thing the patched clients will prevent: they also report the MELEE and TEAMS settings from the properties file. Now if only we can get everyone to adopt them (only 6 uploaders active this month, shouldn't be too hard?) ... --Darkcanuck 03:16, 6 April 2009 (UTC)

  • (off-topic) Which rumble do you most want contributions for? Right now I run roborumble and meleerumble with UPLOAD = DOWNLOAD = NOT at night (I usually shut off my internet at night, but I leave my machine running, so it doesn't use SERVER, but GENERAL), and when I wake up, I change UPLOAD = DOWNLOAD = YES again. Should I switch to the Team Rumble instead of the RoboRumble? » Nat | Talk » 09:15, 6 April 2009 (UTC)

Melee Rumble

Is something wrong if, each time I run meleerumble, it spews out tons of HTML code? --Starrynte 18:04, 9 April 2009 (UTC)

Sounds odd... can you send me a sample? (jerome at darkcanuck dot net) I don't often run a melee client but I don't remember seeing extra output. --Darkcanuck 07:53, 10 April 2009 (UTC)


Why does DrussGT 1.3.6 have 699 pairings when there are only 699 bots in the rumble? It should have only 698 so far (it can't be paired with itself). » Nat | Talk » 07:37, 15 April 2009 (UTC)

  • OK, it's down to 697 now, and it lost PL score. » Nat | Talk » 08:47, 15 April 2009 (UTC)
Data in the ranking tables only updates when a bot gets a new battle result. So if bots are added to or retired from the rumble, it may take a little while for all the existing competitors to fight one battle each and get updated. If you want to use data from the rankings table, I'd suggest waiting until the bot has at least 2000 battles and there have been no changes to the participants list for at least one day. --Darkcanuck 18:03, 15 April 2009 (UTC)

Comparison between robots

I like the new feature for comparison with old versions. Can you put a "total" row at the end of the comparison, and maybe add a sorting script like the one on the main ranking page? I hope to have a look at the server's code one of these days. --lestofante 11:54, 28 April 2009 (UTC)

Yeah, I love it! It was the one thing I really missed from the old server, and displaying recent versions with links is very nice. I second the "total row" idea, and listing "best" version among the links might be nice too, but I could live without either of those. I know I'm late to the party, but your RR server is really sweet, major thanks from me for all your hard work. --Voidious 13:50, 28 April 2009 (UTC)

Really good job, I used to save the page of my old bots and compare them in Excel. For the new features proposed, I like the sorting idea the best. Great work man. --zyx 14:57, 28 April 2009 (UTC)

Thanks! Sorting is already enabled, it works just like the other tables -- you may need to reload the page or clear your browser cache to update the javascript? Will a totals row really help more than the average % score and survival at the top of the page? --Darkcanuck 03:06, 29 April 2009 (UTC)
The sorting indeed works fine for me, nice. The % score and survival are not limited to the bots they have commonly faced, that's why I'd still find myself calculating the total from the table. (Especially before the new one has all its pairings, yes I'm that impatient. =)) It's no biggie for me to copy/paste into Excel for that (as I've been doing for however many years), but just FYI that's why it could be different. Honestly I feel guilty even mentioning more bells and whistles, but since you asked... --Voidious 03:28, 29 April 2009 (UTC)
So really what you want is an APS & avg. survival for common pairings only, correct? I could put that in the summary table at the top... --Darkcanuck 03:36, 29 April 2009 (UTC)
Yep, that would be the same for my purposes. Thanks dude! --Voidious 03:41, 29 April 2009 (UTC)
Try it out... ;) --Darkcanuck 03:50, 29 April 2009 (UTC)
Wow, you're quick! Awesome, thanks again. =) --Voidious 3:57 26 April 2009 (UTC)

Very nice! I've been missing this! Now, I don't want to sound ungrateful or anything, but I had an idea that would help comparisons even further: if there was an equivalent of an ELO graph that runs off the expected score and the diff, so it's easy to (graphically) see where you lost or gained points on a version, against strong or weak bots. I'm not sure if you would be able to just feed the graph software different data, or if you need to go in and make a copy which you could adapt to pull different data, but I'm fairly sure it's a feature which would see good use! --Skilgannon 15:49, 30 April 2009 (UTC)

Now you want a graph?!? I think you'll have to call ABC out of retirement to look into this -- my javascript skills are quite limited... ;) --Darkcanuck 16:17, 30 April 2009 (UTC)

Probable bugs

I've now gotten 5 different crashes as I've been running the melee battles over the last 24 hours on 2 different systems. 2 of them were out of memory failures, 1 battle thread exception, and 2 illegal awt something or anothers. The common thing I noticed was robot Justin.Mallais 10.0 running in each group. That robot also takes my system down to a crawl while running. Anything else I can add to help you out? --User:Miked0801

No, these are not server bugs. The out-of-memory failures mean that you set the Java heap size too low; try -Xmx512M or -Xmx1G instead of the default -Xmx256M and try again. The battle thread exception should be reported on the SourceForge tracker. The AWT thing does sometimes happen, but I don't think it makes the client crash. If it does crash, report it on the tracker, too.

In case you don't know, sign your comments with --~~~~; it will automatically link to your user page with a nice timestamp. » Nat | Talk » 15:41, 29 April 2009 (UTC)

Hey, you might like to know (if you didn't notice) that the RR client now has the option to exclude certain bots or packages (set in the ...rumble.txt file). I haven't played with it much, but I have been tempted by SlowBots in the past =), and this sounds like a good situation for it. Not that this precludes the existence of bugs to be fixed in the RR client. But on that note, I think FlemmingLarsen may handle the RR client code, while Darkcanuck just setup a new server for it to point to. --Voidious 15:44, 29 April 2009 (UTC)
Yep, I only modified the RR client so that it sends the version to the server -- bugs should be logged at sourceforge for Fnl and Pavel to look into. The default melee memory setting is definitely way too low and really needs to be at least 512M as Nat pointed out (this has been fixed in later, unstable versions). My client runs fine with this amount, but I don't use that computer for anything else... works fine although it has the unfortunate quirk of sending tons of output to the console, including occasional awt exceptions (which don't crash the client or seem to affect results). --Darkcanuck 16:27, 29 April 2009 (UTC)
Is there a better place on the wiki for client bug discussions then? BTW, I changed my memory settings and am testing now. --Miked0801 16:34, 29 April 2009 (UTC)
RoboRumble/Reported_Problems is the best place to start if it's not clear whether you're seeing a problem with the server, client or a specific bot. There are links to this area plus the sourceforge tracker too. --Darkcanuck 16:38, 29 April 2009 (UTC)
A quick update on the AWT thing. It happens 100% of the time when I start my Internet Explorer browser while running the game in the background. It also happened when Outlook sent me a meeting reminder. But on the server side, is there any way to make sure that unpaired robots take priority when being selected for random battles? I've run nearly 800 nano battles and have yet to get my last pairing (and have only hit one other robot once). I've also noticed that many pairings have yet to occur for bots with over 5000 battles complete in general melee. This might be a random number/selection bug, or it might be bad luck. Either way, this should probably be nudged to help out the ranking integrity, especially when I've battled other bots over 20 times. --Miked0801 23:34, 29 April 2009 (UTC)

I've been looking at pairings more closely recently and can tell you this much:

  • the server always reports missing pairings to the client on every upload (but only for the two bots in the pairing, limited to 50 pairs).
  • the client doesn't actually pay attention to this data until a bot reaches the BATTLESPERBOT number (usually 2000); until that point pairings seem to be chosen randomly. (This turned out to be incorrect; I noted a quirk below that causes pairing completion to take longer than expected.)
  • there's definitely something funny going on with melee and I'm not sure how the client puts together 10 bot matches. The server should be doing the same thing as for 1-on-1 but maybe the client doesn't use it?
  • I've seen (and others have reported) the client get stuck on one pairing, running it over and over...

--Darkcanuck 00:40, 30 April 2009 (UTC)

I just peeked at the client source and it looks like melee doesn't use "smart" battles, so it's completely random... --Darkcanuck 00:46, 30 April 2009 (UTC)

Ok, I did some further digging and managed to patch my client to use priority battles in melee, so the missing pairings should start to sort themselves out soon. I'll make it available once I'm sure there are no bugs. But I also found that the way the client stores these pairings can lead to the same pairing being run over and over again -- especially in melee. To work around this problem, I've updated the server so that the missing pairings are sent to the client in a somewhat randomized fashion. This should help speed up the rate at which pairings are completed in all categories.
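The selection scheme described in the paragraph above can be sketched roughly as follows. This is a hypothetical illustration with invented names, not the actual roborumble.jar code: run one of the server-reported missing pairings when any exist, picked at random so the client doesn't repeat the same pairing over and over, and otherwise fall back to a fully random pairing.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Rough sketch of "smart" battle selection with randomized priority
// pairings. Hypothetical names -- not the actual client code.
public class BattlePicker {
    private final Random rnd;

    BattlePicker(Random rnd) {
        this.rnd = rnd;
    }

    // priorityPairs: missing pairings the server sent back on the last upload.
    String[] nextBattle(List<String[]> priorityPairs, List<String> bots) {
        if (!priorityPairs.isEmpty()) {
            // Random choice among priority pairings avoids getting stuck
            // re-running the first pairing in the list.
            return priorityPairs.get(rnd.nextInt(priorityPairs.size()));
        }
        // Otherwise pick two distinct participants at random.
        int a = rnd.nextInt(bots.size());
        int b = rnd.nextInt(bots.size() - 1);
        if (b >= a) b++; // never pair a bot with itself
        return new String[] { bots.get(a), bots.get(b) };
    }

    public static void main(String[] args) {
        BattlePicker picker = new BattlePicker(new Random());
        List<String> bots = Arrays.asList("a.Alpha 1.0", "b.Beta 1.0", "c.Gamma 1.0");
        String[] pair = picker.nextBattle(Collections.<String[]>emptyList(), bots);
        System.out.println(pair[0] + " vs " + pair[1]);
    }
}
```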

The survival 0/0 = 0 bug is kinda annoying as well. Every now and then a melee battle occurs with one of the melee gods and none of the nanos survive. Seeing a 0% survival freaks me out. :) --Miked0801 23:52, 30 April 2009 (UTC)

How to Enter

How do I enter? I have a decent nano I want to try. --Awesomeness 21:55, 6 May 2009 (UTC)

  • include your bot on the participants page (RoboRumble -> Participants 1-v-1 or melee) and it will automatically get its battles on the running clients. See also RoboRumble -> Enter The Competition. Good luck! --GrubbmGait 22:19, 6 May 2009 (UTC)
Okay, I did... Do I just wait now? --Awesomeness 00:02, 7 May 2009 (UTC)
(edit conflict) Looks like you got it! Clients only refresh the participants list every 2hrs, so it may take at least that time for a new bot to show up in the rumble. 550 battles and climbing... with all the processing power running clients recently, Elite 1.0 should be at 2000 battles in no time! --Darkcanuck 00:41, 7 May 2009 (UTC)
Yep, your bot will get battles from those of us running a RoboRumble client. If you want to contribute battles yourself, check out the RoboRumble/Starting With RoboRumble instructions. There are a few things to note, though (that should be added to that page):
Once that's all set, just run or roborumble.bat. Running a client is not required, but definitely appreciated if you're entering bots, and you won't have to wait as long to get a stable rating. =)
--Voidious 00:39, 7 May 2009 (UTC)

What's the problem?

Hey Darkcanuck, what's the problem? I can't find any problem with my client... » Nat | Talk » 08:53, 16 May 2009 (UTC)

Ok, ok. I just found that some roborumble results were injected into resultsmelee.txt -- weird... Fixed now. » Nat | Talk » 09:18, 16 May 2009 (UTC)

And please unblock me soon -- 4,613+ battles are waiting! (My clients are running with UPLOAD=NOT.) I've fixed all the issues with my result file. Does anyone know how it happened? My resultsmelee.txt was injected with roborumble results and a LOT of whitespace. Actually, I much appreciate that you blocked me -- what would have happened if my client had reached the whitespace (around 1000 TAB characters)? » Nat | Talk » 13:39, 16 May 2009 (UTC)

Ok, but I think you should stick to the 1v1 rumble until we can figure this out -- I'm going to keep the meleerumble block active. Are you using the iterate feature? --Darkcanuck 20:10, 16 May 2009 (UTC)

Thanks, but it would be better if you unblock melee and keep one-on-one blocked, because I run the melee rumble as my main client (those battles are 90% melee and 10% one-on-one). Yes, I use the iterate feature (I hate the version check at initialization). » Nat | Talk » 02:12, 17 May 2009 (UTC)
Well that's 4000+ possibly suspect battles... I don't feel like cleaning that up if there are more bad results. If you separate your melee and 1v1 installs and delete all saved results then we can turn this back on. --Darkcanuck 06:03, 17 May 2009 (UTC)
OK, my roborumble client has been moved to my hard disk without any data transfer (blank result file), and my computer crashed around half an hour ago, so I can say that no suspect battles are left (lost 9000+ battles this time). But all are clean :-) » Nat | Talk » 06:14, 17 May 2009 (UTC)

Do you have separate installs (e.g other directories) for melee and one-on-one? If not, it is strongly advised not to run melee and one-on-one at the same time. --GrubbmGait 23:02, 16 May 2009 (UTC)

No, I'm not using separate installations (not enough space on the ramdrive). :-) » Nat | Talk » 02:12, 17 May 2009 (UTC)
I've now moved my one-on-one client back to the hard disk; the ramdrive is now used for melee only. » Nat | Talk » 02:20, 17 May 2009 (UTC)
When running from the same installation, if the same bot is running a melee battle AND a one-on-one battle simultaneously, you get strange stuff. The same goes for running a client and developing at the same time in one installation. One installation should handle one thing at a time, so use separate installs, even though in your case that is less convenient. --GrubbmGait 09:26, 17 May 2009 (UTC)
Now they're separated =) » Nat | Talk » 11:24, 17 May 2009 (UTC)

Well I can't really be sure, but the weird data and blanks injection sounds like a bug in the ram disk implementation to me. I haven't seen anything that could cause that in the rumble client's code nor have I heard of anyone having that issue before, but I feel that a small pointer related bug in a ram disk implementation can easily cause that behavior. --zyx 07:10, 17 May 2009 (UTC)

I don't think it is only the ramdisk's fault; I think it is both Java's and the ramdisk's fault. Anyway, I use separate installations (still kept synchronized) now. » Nat | Talk » 07:17, 17 May 2009 (UTC)

Have you unblocked my melee client? After cleaning the result file (accidentally, actually), my client is running again, now at iteration 15 with 2000+ battles waiting.


  • My roborumble client is at C:\roborumble while meleerumble client is at R:\roborumble (ramdisk)
  • My melee result file was cleaned accidentally by a computer crash.

» Nat | Talk » 08:36, 17 May 2009 (UTC)

Iteration 34, 4500+ battles, please! (I think you are sleeping) » Nat | Talk » 11:24, 17 May 2009 (UTC)
Ok, done. :) --Darkcanuck 15:09, 17 May 2009 (UTC)
Thanks, and sorry if my client uploaded results for Diamond 1.01/1.02. » Nat | Talk » 07:14, 18 May 2009 (UTC)

Hey Darkcanuck, my apologies... It seems my client uploaded some crap to the server... I don't know why. I followed the directions above (vr / with the robocode patch, and changed the urls), and the install was meant for the melee rumble only. If you have any suggestions I'll use them; otherwise I have no problem waiting for the foolproof version. (It would be nice to one day run the RoboRumble and enter a bot via the drop-down menu in Robocode.) Please unblock me so I can view the rankings, and I will no longer attempt to run the melee rumble unless you have suggestions. Thanks -Justin

No worries. I'll unblock your uploads shortly, but I fixed the bug that also blocked you from viewing the rankings -- that was unintended. I think your client was using the 1v1 participants url (that's what was posted above), but for melee you should be using This is in the meleerumble.txt file of course, which should be used by running meleerumble.bat/sh. Also important to have MELEEBOTS=10 in that file too. --Darkcanuck 22:32, 26 May 2009 (UTC)
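To summarize the melee client configuration mentioned above, a meleerumble.txt would contain something like the following. Only properties actually named in this discussion are shown; the melee participants URL is elided in the comment above, so it is omitted here too, and the TEAMS value is an assumption based on the YES/NOT convention these files use.

```
# roborumble/meleerumble.txt -- run via meleerumble.bat or
MELEE=YES
TEAMS=NOT
MELEEBOTS=10
UPLOAD=YES
DOWNLOAD=YES
# ...plus the melee participants URL property (elided in the comment above)
```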

Yes, I was using the wrong list.. :( I changed it... (MELEEBOTS=10 was ok.) If you want to give me the go-ahead, I will try again... (though I imagine I should open up resultsmelee.txt and delete everything in there first), right? -justin

Before restarting your client, delete all of the files in the roborumble/files and roborumble/temp directories. This will give you a fresh start. Also try running just one iteration first and check that your client is working ok before letting it run unattended. I'll try to have a look in a few hours to see how things are going. --Darkcanuck 23:34, 26 May 2009 (UTC)
And check out the pages User_talk:Jlm0924 and Talk:Mallais -- people have questions for you! --Darkcanuck 23:38, 26 May 2009 (UTC)

Hey Darkcanuck - I'm not sure if this is just spillover from the previous issue, but I figure better safe than sorry... I see some results from your client (or using your name) in the melee rumble with bots that shouldn't be there: [2] and [3], for example. --Voidious 01:53, 27 May 2009 (UTC)

It's related to the bots that Justin sent results for. My client is patched to do smart battles in melee, so the server keeps sending missing pairings for these bots (which the client naively runs). And the server no longer removes bots until at least 4hrs after a bot's last upload so the priority battles keep them in play. This loop will be broken once the bots (3, I think) reach full pairings. Normally I'd just stop my client for 4hrs or so, but I'm out of town and can't access that machine... --Darkcanuck 04:49, 27 May 2009 (UTC)

Hey Darkcanuck - I've encountered a strange anomaly with Diamond 1.24 in the MeleeRumble after my client removed it. Diamond 1.24 details and Diamond 1.242 vs 1.24 are missing all pairings for Diamond 1.24. A few other old versions (of Diamond and otherwise) I tried all seem fine, so I can't offer any more clues as to what the problem might be. Thanks for re-enabling removals, btw, and I dig the swanky new menu bar. --Voidious 07:13, 2 August 2009 (UTC)

Hmmm, seems like 1.24 was retired twice, which made its stats disappear (they're still in the database, just harder to sort out). It looks like a concurrency problem -- I have a potential solution but it's too late here to wrestle with it now. I'll take a look tomorrow and try to get this sorted out. --Darkcanuck 08:17, 2 August 2009 (UTC)
Should be fixed now -- let me know if you see any problems with future updates. Fixing 1.24's data is not done yet, as it's somewhat tricky. --Darkcanuck 22:58, 2 August 2009 (UTC)
Cool, thanks! Don't worry about fixing 1.24 on my account, I don't particularly need those details... --Voidious 23:14, 2 August 2009 (UTC)

Sorry, I guess I'm good at this... I reverted Diamond to 1.241 earlier today (not the problem version above), and that seems to have caused another problem. Every attempted upload gets something like this:

Fatal error: Call to a member function checkState() on a non-object in /users/home/jlavigne/domains/ on line 147 Unable to upload results roborumble,35,800x600,Voidious,1249508467470,SERVER voidious.Diamond 1.241,4588,2073,35 dks.MicroDanMK2 1.0,15,15,0

And since it doesn't get its battles, it just keeps running battles for Diamond 1.241. Sorry to be such a troublemaker! Feel free to revert Diamond to some other version for now as a stop-gap measure, if that will help. Edit: Just realized that the obvious and simple stop-gap measure would be to remove him.

--Voidious 21:56, 5 August 2009 (UTC)

Ok, that was a lame bug from fixing the concurrent removal issue -- didn't test reactivation. Re-entered Diamond 1.241 and it seems to be ok now. Nice job finding these bugs! =) --Darkcanuck 06:35, 6 August 2009 (UTC)

Hit this one again, sorry. =( Diamond 1.26 [4] got its pairings zeroed out like 1.24 did. As with 1.24, I don't particularly care about restoring the data, just letting you know about the bug. Given the timing and the "what's changed?" aspect of troubleshooting, I should mention that I left my "RumbleWatcher" Twitter script running over night, once per hour, though it has only done 10 URL accesses to the RR server spread over the last ~15 hours and seems an unlikely source of the problem. --Voidious 17:26, 9 August 2009 (UTC)

Another double retirement, the check I added wasn't enough. Time to try table locking again -- I hate this problem... --Darkcanuck 19:52, 9 August 2009 (UTC)
I did add some table locking before your last Diamond update and it seemed to go smoothly. Since you're probably going to keep updating this bot, let me know if you see any other issues. Also, I found a duplicate for 1.261 and marked it as such -- the database will no longer allow this to happen either. Looks like a lot of clients were running today. --Darkcanuck 04:52, 10 August 2009 (UTC)

Accel/Decel rules

Discussion moved to Talk:Robocode/Game_Physics

Priority Melee Battle

Darkcanuck, would you mind posting a patched version of roborumble.jar that does priority melee battles? » Nat | Talk » 11:07, 2 August 2009 (UTC)

Ok, the patched jar has been replaced with the version with priority melee battles. --Darkcanuck 22:59, 2 August 2009 (UTC)

Bad results

Hey Darkcanuck, would you be able to remove the two results [5][6] that got uploaded by RednaxelaIPod? Sadly, this experiment didn't quite work out, sorry about the bad results =\ --Rednaxela 03:49, 7 August 2009 (UTC)

Does that really say "ipod"?! I guess this means I finally have to work on the recovery routines I sketched out somewhere... --Darkcanuck 05:21, 7 August 2009 (UTC)
Yep, I aaaalmost got roborumble successfully working on my iPod Touch, but I foolishly didn't disable uploads before checking that the battles were actually running successfully. Sorry about that =\ --Rednaxela 05:28, 7 August 2009 (UTC)
Ok, I wrote some routines to semi-automate this chore and they've been cleaned up. Also removed the 18 "Nat_1711" battles. Flagging the bad ones involves some quick SQL in the database; the rest is a nice point & click interface to remove the offending battle and then rescore the pairing (rankings will update next time that bot gets an upload). --Darkcanuck 07:00, 11 August 2009 (UTC)

Looks like there might be another batch of bad battles... do you mind removing this? And is there any way we can look at the last X battles uploaded by a certain user? It would make looking for bad results much easier. --Skilgannon 19:36, 11 September 2009 (UTC)

I'll look into it, but my policy is only to remove battles that are definitely client problems. With only one bad result it's hard to say if it was the client or the bot, but with so many new bots in the rumble recently I would have expected to hear about more problems if it was the client... Definitely we need a better way to find patterns of bad results. Maybe turn the client info on the battle details page into a link that would report the last X battles sent by that client? Then from there you could jump to each battle detail list and see if there are any outliers. Keep in mind that at 12.5 million records (and counting) the battle results must be queried carefully to keep uploads humming along. --Darkcanuck 06:12, 12 September 2009 (UTC)

I was talking with Synapse here and he can't reproduce that result on his own system. I suspect that spinnercat might have run something CPU intensive at the time of that battle, causing Geomancy to skip a lot of turns or be disabled. And how are battles ordered in your database? Are they just sequentially added at the end? Because if so it wouldn't be very CPU-intensive to traverse backwards through the log until you have X battles by a certain uploader. --Skilgannon 08:30, 12 September 2009 (UTC)

I see one bad battle of Diamond 1.443 vs MagicD3 0.41: [7]. (For reference, vs Diamond 1.442: [8].) Looks like a new RoboRumbler, "rael". No idea if this is a problem with rael's client or with MagicD3, but I figured this was the place to post about it... --Voidious 03:37, 9 October 2009 (UTC)

Another vs Gaff: [9]. --Voidious 04:39, 9 October 2009 (UTC)
More... Clearly the client. [10], [11]. --Voidious 04:43, 9 October 2009 (UTC)
Thanks for spotting this. Checking the database it looks like at least 10% of battles (if not more) from this client have a 400-0 score, which is quite odd. I've temporarily suspended "rael"s uploads and it looks like I have 600+ battles to clean up - ugh... --Darkcanuck 05:39, 9 October 2009 (UTC)
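The kinds of sanity checks discussed on this page -- the 8000-point score cap and the 35-round survival check from the top of the page, plus a shutout check for the 400-0 results seen here -- could be sketched like this. Invented names and an illustrative rule set only; the real server code is PHP and its exact rules may differ.

```java
// Illustrative sketch of server-side result sanity checks. Invented
// names; not the actual server code.
public class ResultCheck {
    // Returns true if a 1v1 result looks plausible enough to store.
    static boolean plausible(int scoreA, int survivalA,
                             int scoreB, int survivalB, int rounds) {
        // No bot has been seen to score more than 8000 points.
        if (scoreA > 8000 || scoreB > 8000) return false;
        // Someone finishes each round, so combined survival should
        // cover the round count.
        if (survivalA + survivalB < rounds) return false;
        // A literal zero score over a full-length battle is far more
        // likely a broken client than a real result.
        if (scoreA == 0 || scoreB == 0) return false;
        return true;
    }

    public static void main(String[] args) {
        // A normal-looking result passes...
        System.out.println(plausible(4588, 35, 15, 0, 35));     // prints true
        // ...while the 20000-16000 MELEE=YES blowup and a 400-0
        // shutout do not.
        System.out.println(plausible(20000, 35, 16000, 0, 35)); // prints false
        System.out.println(plausible(400, 35, 0, 0, 35));       // prints false
    }
}
```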
Would it be easier for us to post re-releases of the affected bots? If it didn't run any random battles (when everyone's over 2,000), that would be simple enough and could save you some work. --Voidious 12:48, 9 October 2009 (UTC)
It's up to you -- I'll probably clear them all out anyway. I've left them in the meantime in case we need to show rael, but so far no response... --Darkcanuck 15:22, 9 October 2009 (UTC)

Hi mate. Yesterday I got some unusual rumble results. I released a new version, but there might be other bots affected as well. It would be nice if you could remove the contributions from Stelokim around this time. This is just one example; I got a lot more with other bots as well.

Wallaby 3.9 vs Capulet 1.2

Thanks in advance.


Hey Darkcanuck, I just went to your server to check the Twin Duel rankings and didn't find a link on the main page, so I went to the 1v1 rankings and navigated to it with the nice menu bar. Then I went back to home, and there was a link to Twin Duel. This means that [12] uses a different page than [13], which is probably not your intention; just wanted to let you know. Keep up the good job. --zyx 04:20, 14 August 2009 (UTC)

They're the same page here. Perhaps it was your browser caching the page Zyx? --Rednaxela 04:23, 14 August 2009 (UTC)

If you see the same on both, then it probably was a caching issue, browsers should very explicitly warn when they don't fetch a fresh copy of a page. --zyx 04:26, 14 August 2009 (UTC)
They both point to the same page. After the update, I had to refresh to see the new version (the browser had cached the old copy). The browser expects the web site to tell it when to not cache, but I'm too lazy to put that in. With the new menu bar though, the old "home" page is now very redundant... --Darkcanuck 04:29, 14 August 2009 (UTC)
Hmm... about the old "home" page being redundant... Maybe there should be a rumble "Home" page which pulls in content from roborumble twitter, rr server updates, and maybe recent edits to wiki discussion pages about roborumble? Not needed, but just some thoughts about what might actually belong on a 'home' page for it. --Rednaxela 04:35, 14 August 2009 (UTC)
Anything to delay that latest gun of Gaff's from entering the rumble. =) Adding the Twitter feed would be easy, let me know if you are interested. Pulling content from the wiki would be a little more work, but VoidBot could still do it pretty easily. But seriously, I want to see that gun blowing away surfers in the rumble, so maybe we can wait a bit before more rumble server-related requests. =) --Voidious 13:59, 14 August 2009 (UTC)
Are you going to delay it or speed it up? =P » Nat | Talk » 14:08, 14 August 2009 (UTC)
Thanks, but I think you're overestimating the power of AS targeting in the rumble. =) The last update, which took Gaff to #1 in the TCK2k7 fast-learning challenge, barely moved his ranking in the rumble. Movement is my weakness. And Holden's problem bots are mostly mirror movers and rammers, not top bots. Now if a ton of bots entered the rumble with guns similar to Gaff's then maybe the bots at the top would have something to worry about... --Darkcanuck 14:29, 14 August 2009 (UTC)
Well, I think Gaff could certainly rise up the PL ranks quickly with the kind of gun it has. I kind of wonder how the PL ranking would be if you got a flattener-only high quality surfing working... --Rednaxela 14:42, 14 August 2009 (UTC)

"Lock wait timeout exceeded" errors

Hey Darkcanuck, I'm running the rumble right now and am noticing errors like the following:

FAIL. Cannot execute query UPDATE game_pairings SET state='1'
				WHERE  gametype = 'X'
				  AND  (bot_id=1260 OR vs_id=1260)  AND state='R' (MySQL error: Lock wait timeout exceeded; try restarting transaction)
Unable to upload results roborumble,35,800x600,Rednaxela,1251081148611,SERVER rsim.micro.uCatcher 0.1,2346,245,34 0.2,653,568,1
FAIL. Cannot execute query UPDATE game_pairings SET state='1'
				WHERE  gametype = 'X'
				  AND  (bot_id=1260 OR vs_id=1260)  AND state='R' (MySQL error: Lock wait timeout exceeded; try restarting transaction)
Unable to upload results roborumble,35,800x600,Rednaxela,1251081483323,SERVER 0.2,2398,836,24 oog.micro.SavantMicro 0.31,2044,1220,11

--Rednaxela 02:48, 24 August 2009 (UTC)
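For what it's worth, the "try restarting transaction" hint in that MySQL error suggests the failure is transient lock contention, so a client could retry the upload after a short backoff. A hypothetical sketch (uploadOnce and the retry policy are illustrative, not the actual RoboRumble client code):

```java
// Hypothetical sketch: retry an upload when the server reports a
// transient "Lock wait timeout exceeded" error. The names here
// (uploadOnce, the backoff constants) are illustrative only.
public class RetryingUploader {
    static final int MAX_RETRIES = 3;
    static final long BACKOFF_MS = 2000;

    // Stand-in for the real HTTP POST of one result line;
    // returns the server's reply text.
    static String uploadOnce(String resultLine) {
        return "OK";
    }

    static boolean uploadWithRetry(String resultLine) throws InterruptedException {
        for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
            String reply = uploadOnce(resultLine);
            if (!reply.contains("Lock wait timeout")) {
                return reply.startsWith("OK");
            }
            // Transient server-side lock contention: back off and retry.
            Thread.sleep(BACKOFF_MS * attempt);
        }
        return false;  // give up; the battle can be re-run later
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(uploadWithRetry("roborumble,35,800x600,..."));
    }
}
```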

And now getting a 503 error

Service Temporarily Unavailable

The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.

--Rednaxela 03:06, 24 August 2009 (UTC)

Yeah, I noticed a high load earlier today too. The same thing happened last Sunday and I ended up taking the service down then for a few hours. Seems back to normal now though. I've also noticed that Gaff 1.42 was reactivated -- looks like either your client or Simonton's? Has anyone done anything unusual today, like start up a new client or one that's been idle for over a week? --Darkcanuck 06:06, 24 August 2009 (UTC)

Perhaps it's Simonton's client that has problems. This should not happen, unless Red has some serious bugs in the new RougeDC » Nat | Talk » 09:50, 24 August 2009 (UTC)

Yeah I noticed that weird result too. I've tried a lot but can't get anything remotely close to that no matter how I try. --Rednaxela 12:47, 24 August 2009 (UTC)

Bug in other versions list?

Look here, it says that other versions are 3.84a, 3.84 and 3.83c. However, Shadow 3.83c didn't participate in the melee rumble, just the roborumble. And if you click the Shadow 3.83c link it will yield "ERROR: Invalid robot name "abc.Shadow 3.83c"" [14] » Nat | Talk » 14:03, 1 September 2009 (UTC)

OK, it isn't visible anymore, but it is indeed a bug. Please fix this. » Nat | Talk » 15:21, 2 September 2009 (UTC)

Or maybe it working again was him fixing the bug? :) --Rednaxela 15:27, 2 September 2009 (UTC)
I don't think so; I think Shadow 3.84c just pushed Shadow 3.83c off the list =) » Nat | Talk » 15:34, 2 September 2009 (UTC)
(edit conflict) Oops, I meant to reply to this... This bug is already on my list -- the query done to show old versions doesn't take into account what games those versions were used in (it was the easiest way to implement at the time). So for bots like Diamond and Shadow, different versions might make it into the 1v1 or melee rumbles, but all will be shown when you look at the details page. Not fixed yet as I'm in bot-programming mode rather than server-tweaking mode at the moment. --Darkcanuck 15:36, 2 September 2009 (UTC)

Show top 20 only

I've noticed that when I'm curious about the rankings I usually only look at the top-20 or so. It might both decrease your server load and page loading time if you could add like a show-top-20 only version of the rankings for people to use. Or alternatively, that you select a robot to see the rankings for and see the 10 bots above and 10 bots below it (or something). What do you think? :) --Positive 16:56, 4 September 2009 (UTC)

Hmmm, the top-20 should be fairly easy to implement, maybe I'll try it this weekend. My only concern is the UI -- when do you show the top 20 and when do you show the rest? I think you're in the minority of users, most people don't have bots in the top-20 and would want to see the full rankings. One possibility would be a summary page for each game type with the top-X details plus the leaders in APS, survival and PL. But it would mean an extra click to get to the full rankings... --Darkcanuck 20:22, 5 September 2009 (UTC)
That's true. You could also do it the other way around, an extra click to get to the limited rankings. I wouldn't mind, I'd just bookmark it. :) --Positive 17:04, 6 September 2009 (UTC)
Any of these ideas would be pretty simple to do with the Query API, too... I'm out of town through today, but I'll gladly try creating the "neighborhood" ranking view if Darkcanuck doesn't get around to it this weekend. (Tomorrow is a US holiday, too, so I'll have time.) --Voidious 17:50, 6 September 2009 (UTC)

Accidentally uploaded some dev. code for this today -- unplanned, but it should work. Try it out; I'll make a nicer summary when I have a bit more time. --Darkcanuck 15:37, 11 September 2009 (UTC)

Seems to be working nicely, I tried several values for the limit parameter, also tried it on roborumble instead of meleerumble, had no problems and it loads very fast. Good job, and nice accident. --zyx 17:59, 11 September 2009 (UTC)
It works great. This is exactly what I wanted, thank you! :) --Positive 21:33, 11 September 2009 (UTC)
Excellent =) Maybe have a drop-down menu to show the top X in multiples of 20? --Skilgannon 21:53, 11 September 2009 (UTC)
Very nice, I shall bookmark this for my ipod touch... since the huge table normally makes it slooooow... :) --Rednaxela 01:27, 12 September 2009 (UTC)

Melee Stress

Whenever the melee uploads heat up, the server always seems to respond slowly, and sometimes not at all. And even without that, uploading battles takes up half of the client's time. Lately I'm often noticing my client completely lock up waiting for a response from the server that never comes, hmm. --Rednaxela 13:14, 21 September 2009 (UTC)

Melee generates 45 uploads per battle, so the server really can't handle the same load as for 1v1 (1/battle). Plus Sundays seem to be particularly slow, likely due to other activity on the server. So firing up many melee clients doesn't really help speed up the results, it just slows down response time for everyone... --Darkcanuck 14:16, 21 September 2009 (UTC)
Sometimes it can generate 180 uploads per battle... » Nat Pavasant » 14:34, 21 September 2009 (UTC)
Yes, an all-nano battle will get up to 180 uploads. --Darkcanuck 15:22, 21 September 2009 (UTC)
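For reference, those figures follow from the pairwise scoring: a melee battle has 10 bots, giving n(n-1)/2 = 45 pairings, and (assuming an all-nano battle is scored in all four weight-class melee rumbles: melee, mini, micro and nano) each pairing is uploaded four times. A quick sketch:

```java
// The 45 and 180 figures above: a 10-bot melee battle yields
// n*(n-1)/2 pairwise results; assuming an all-nano battle is scored
// in four rumble divisions (melee, mini, micro, nano), the upload
// count quadruples.
public class PairingCount {
    static int pairings(int bots) {
        return bots * (bots - 1) / 2;
    }

    public static void main(String[] args) {
        int perBattle = pairings(10);      // 10 bots per melee battle
        System.out.println(perBattle);     // 45 pairwise uploads
        System.out.println(perBattle * 4); // all-nano: four divisions
    }
}
```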
Hm, would it help if the uploads were batched per-battle or is the bottleneck CPU? I wonder, is much of the numeric processing being done in PHP? --Rednaxela 15:10, 21 September 2009 (UTC)
I think batched uploads will help a bit (would like to do this in a version 2 protocol) but retrieving each of the pairing details (not as bad as 1v1, since the competitor list is half as big) and recalculating ELO/Glicko2 is the biggest performance hit. Oh, plus the upload throttle (0.5sec) to keep you guys from obsessing over client throughput. All processing is done in PHP since it was too complicated/slow to do it in MySQL. While there's probably room for improvement, a better solution would be to come up with a more efficient scoring method for melee, rather than treating each matchup as a 1v1 result. --Darkcanuck 15:22, 21 September 2009 (UTC)
MySQL being slow? A well-designed stored procedure should be at least 10x faster than PHP processing unless the SQL server is absolutely horrid. Hmm... well, I do think that the nicest scoring system would be a Condorcet voting algorithm, however it should be equally slow as treating every match as a 1v1 result. I don't think we can get away with less processing unless we switch to a system with horrible sensitivity to the specific bots chosen in the random battles. I think the key thing that needs addressing is getting the processing out of PHP, which almost surely makes the processing at least 50x slower than it needs to be. --Rednaxela 15:32, 21 September 2009 (UTC)
What if it just stored results in the database, and had another fast C/C++ program execute every 5 minutes to update all scores and clear the cache for the ranking page? This way uploading would be just one insert statement (plus table locking) and viewing a page would be just reading from cache. And, just wondering, is the server hosted in a real server environment or just your home computer? From the code, I think it is your home computer, but I'm amazed that it is able to accept this much load; normally with this load you might need clustered MySQL servers =) » Nat Pavasant » 15:38, 21 September 2009 (UTC)
Wow, apparently mysql stored procedures REALLY suck. I take back what I said about them being faster than PHP. I said that because I couldn't imagine something being slower than the terrible slowness of PHP. I still believe that getting the processing out of PHP is the key thing that needs to be addressed, though. --Rednaxela 15:48, 21 September 2009 (UTC)
I still think that uploads are "fast enough" (except on Sundays). Until I beat MirrorMicro I don't really have much time to dedicate to this. As for PHP processing, I remember when trying to speed things up way back, the queries were the bottleneck. It's open source if you want to take a look...
Nat, the server is definitely not on my home computers. --Darkcanuck 16:00, 21 September 2009 (UTC)
Hmm... with query bottlenecks... I think the 'correct' fix then would be something like having a memcached install on the same server as is running the PHP --Rednaxela 16:07, 21 September 2009 (UTC)
Nope. If the queries were even slightly repetitive then this would help, but all of the slow queries are only valid once -- the data changes immediately afterward. This application defies the standard high-read/low-write of most webapps; it's definitely very-high-write/low-read. --Darkcanuck 16:26, 21 September 2009 (UTC)
In that case... I wonder if using a sqlite database directly on the machine running the PHP would improve performance. The database size/load may be too much for sqlite... but if it's not, then maybe it being on the same box (and not being subject to Sunday cron jobs clogging the database server) would improve performance dramatically. --Rednaxela 20:04, 21 September 2009 (UTC)
Sqlite doesn't allow concurrent writes, which would really slow things down. I broke an old bottleneck many, many months ago by moving the pairings table to InnoDB so that concurrent uploaders could write to this table. Hmmm, perhaps all the melee activity is causing a bottleneck on the participants table? That one is still MyISAM and should be easy to convert. --Darkcanuck 20:17, 21 September 2009 (UTC)
True, Sqlite doesn't allow concurrent writes which is a disadvantage, but I wouldn't write it off for that. I believe the main reason concurrent writes were of significance before was because the database latency/speed was so bad in the first place, and that it's quite possible that the speed gained from localhost sqlite may be fast enough that lack of concurrent writes would cease to matter so much. If changing the participants table doesn't fix things, I may be tempted to try out porting the server to Sqlite and testing that out locally when I come across some time. --Rednaxela 00:22, 22 September 2009 (UTC)
Your $isDebug = $_SERVER['REMOTE_ADDR'] == ''; or something like this makes me think it is on your home computer, or a server hosted at your home. » Nat Pavasant » 16:21, 21 September 2009 (UTC)
I run the dev. server on my home machine. Errors are normally suppressed but I need to see them for debugging. --Darkcanuck 16:26, 21 September 2009 (UTC)
By the way, if a bigger or dedicated machine for the RR server would be helpful, I'd certainly be willing to pitch in money for that. I know it would still be a lot of time and effort to setup, migrate, maintain, and then the cost of power... I'm not saying I think this is the best solution, just throwing the offer out there if it would be a good solution for you. --Voidious 15:54, 21 September 2009 (UTC)
It's not using a VPS, but I think I have more resources at my disposal than the wiki setup. Thanks for the offer, but I think upgrading to a bigger platform would be prohibitive, especially in terms of time. --Darkcanuck 16:00, 21 September 2009 (UTC)

This would be a controversial solution, since all existing melee bots are optimized for 35-rounds, but what if we switched to longer battles (say 100 rounds)? Theoretically, you'd get more accuracy per uploaded battle result (and client iterations would take longer). I'm sure a few stragglers would start hitting performance issues due to longer battles, and of course this would favor learning bots... --Voidious 18:48, 21 September 2009 (UTC)

I'd be in favor of that if and only if it was done with a new participants list that was started from scratch, as to avoid hurting bots not made for it. But... then we'd only gain server resources if the old melee rumble was discontinued or at least deprecated. --Rednaxela 20:04, 21 September 2009 (UTC)
I second Rednaxela. » Nat Pavasant » 10:59, 22 September 2009 (UTC)

ELO Rating

I wonder what is happening with the ELO rating. This rating centers around 1600, so a bot with over 50% APS should get more than 1600, shouldn't it? » Nat Pavasant » 11:06, 22 September 2009 (UTC)
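For illustration, under the textbook Elo expected-score formula that intuition does hold: a bot whose APS against an average (1600-rated) field exceeds 50% maps to a rating above 1600. A sketch (assuming the standard formula and that APS approximates the expected score against the pool average; not necessarily the server's exact implementation):

```java
// Sketch: inverting the textbook Elo expected-score formula
// E = 1 / (1 + 10^((Rb - Ra)/400)) to get a rating from an
// average percentage score (APS), assuming the opponent pool
// averages the rating-system center (1600 here).
public class EloCheck {
    // Rating whose expected score vs. a poolAvg-rated opponent equals aps.
    static double ratingFromAps(double aps, double poolAvg) {
        return poolAvg - 400.0 * Math.log10(1.0 / aps - 1.0);
    }

    public static void main(String[] args) {
        System.out.println(ratingFromAps(0.5, 1600) == 1600.0); // 50% APS: exactly the center
        System.out.println(ratingFromAps(0.6, 1600) > 1600.0);  // over 50% APS: above the center
    }
}
```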

One more bad result

I was telling Voidious about this bad battle. It looks like a one-off occurrence; while it seems kind of insignificant, it makes that version of Horizon show up as having a higher score against Dookious than even Shadow. Seeing how unlikely it is that Dookious actually crashed, can you remove the battle? « AaronR « Talk « 21:44, 1 October 2009 (UTC)

Results unstable

I noticed that there are 9 bots in the RoboRumble with too many pairings right now (aka, "results unstable"). For instance, stelo.PastFuture 1.2a details lists two versions of Diamond, 1.443 and 1.47. The last battle against 1.443 was 5 days ago, so it seems like that should've been removed from its details by now. Is this a bug, or just some part of the process that I'm misunderstanding? --Voidious 17:01, 20 October 2009 (UTC)

I've slowly been rescoring pairings for the 1000 battles uploaded by rael, so I'm probably the cause. I've found a few bugs in this process (now fixed) but there may be a few rogue pairings that aren't properly retired. Thanks for the heads up, I'll see what I can do to fix those. Hopefully it's not a different bug... --Darkcanuck 02:15, 21 October 2009 (UTC)

Database lock error

Just to let you know, Darkcanuck

Removing entry ... ags.Glacier_0.2.6 from meleerumble
FAIL. Cannot execute query LOCK TABLES participants WRITE, participants AS p WRITE,
                                    bot_data WRITE, bot_data AS b WRITE,
                                    game_pairings WRITE, game_pairings AS g WRITE (MySQL error: Lock wait timeout exceeded; try restarting transaction)

Don't know if this will continue or if it's a significant problem. --Rednaxela 02:27, 30 October 2009 (UTC)

Seems to be fixed, nevermind :) --Rednaxela 02:29, 30 October 2009 (UTC)

Missing Details of Retired Bot

You might want to take a look into this =) --Skilgannon 21:46, 7 November 2009 (UTC)

I did that... =) Most of the battles submitted by "rael" affected this version of Toorkild, so I just wiped them out instead of the long, tedious score rebuilding. I knew you had re-released anyway. I should probably scrub clean the remaining battles too. --Darkcanuck 22:19, 7 November 2009 (UTC)

'Stabilization Battle' Not Priority?

My client is now running completely random pairings, but I see that there are still several bots in the rumble sitting on 745 pairings. Shouldn't the server be telling my client to run these bots so their rating can stabilize? --Skilgannon 13:35, 11 November 2009 (UTC)

Hmmm, Voidious pointed this out a while back and it looks like I forgot to clean those up. They're related to the "rael" battles; a bug in the rescoring reset 9 retired pairings to active status. I've updated the offending pairings, so when those 9 bots get another battle they'll go back to 744 pairings again. Thanks for reminding me! --Darkcanuck 23:20, 11 November 2009 (UTC)
It seems like all of the micro rumble is also affected, except for Toorkild 0.2.1b (which is a post-rael release). In mini only WaveShark is affected. The main reason this needs to be done I think is because of the archives, which only get taken if all the bots have the same number of pairings. --Skilgannon 14:02, 12 November 2009 (UTC)
Ok, both have been fixed. For the micros, all the older bots still had a Toorkild 0.2.1 pairing somehow. WaveShark had 5 pairings with retired bots, so the minis should go back to normal soon. If you see anything else, let me know -- I've been away from robocode for a bit due to work. --Darkcanuck 21:58, 14 November 2009 (UTC)
The whole of the roborumble now - I'm going on the fact that every bot which hasn't been (re)released in the last day or 2 has an extra pairing. Comparing to Diamond it looks like it may be an old version of DrussGT that didn't get removed properly. --Skilgannon 06:39, 16 November 2009 (UTC)
Weird -- fixing this right now. DrussGT 1.6.3 pairings were still active for most bots. As far as I can tell, there may have been a concurrent retirement and re-activation that corrupted the pairing data. The conditions and timing for this to happen are highly unlikely, but possible... I'll have to re-think this part of the database since retirement/reactivation has always been a headache anyway. --Darkcanuck 07:19, 16 November 2009 (UTC)
Not sure if this is related... but I just added a lightly modded version of demonicRage, 2.2b, and though it works as expected on my end, in the rumble it seems it is being kicked out of the battles. I removed it ASAP and will replace it with a new jar just in case. -Jlm0924
From a quick glance at 2.2b's details sheet, it looks like bad battles are coming from a MeleeRumble client "DavidR". I tested 2.2b on my system and it seems to be working fine here, so I doubt it's anything with the bot. DavidR, if you see this, could you halt your clients for now until we can figure out what's going on? And FYI, it looks like other bots are getting 0 scores from DavidR, as well, such as Shadow here: [15]. --Voidious 23:20, 16 November 2009 (UTC)
Not related, but those results are indeed bad. I've suspended DavidR's uploads with a note to check the wiki. I also limited removals to my client until I can resolve the other issues. Now to look into how much cleanup will be needed... --Darkcanuck 03:42, 17 November 2009 (UTC)
Hi, I just checked in this morning and saw the error messages in the console of MeleeRumble. I've stopped processing battles. Do you have any idea what was causing the problems? I was using the superpack downloaded from the wiki. I'm using Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_20-b02-315) Java HotSpot(TM) Client VM (build 1.5.0_20-141, mixed mode, sharing) on a Mac Pro. --DavidR 10:11, 17 November 2009 (UTC)
It might have something to do with running Java 5? AFAIK all bots *should* still be compiled as Java 5 compatible by adding -target 1.5 to the compile, but one can't be sure of these things =) --Skilgannon 10:36, 17 November 2009 (UTC)
Well, Java 5 wouldn't explain an issue with Shadow, I'm pretty sure... DavidR, what happens if you run a bot like Shadow outside of the rumble scripts, in that Robocode installation? --Rednaxela 13:56, 17 November 2009 (UTC)
Running battles between two abc. robots (abc.Shadow and abc.Tron), I created a battle with the following 5 robots:
abc.Tron 2.02 (the Robocode platform indicates it was built for 1.5.4): initializes correctly.
abc.Shadow 3.84 and 3.84g (indicate that they were built for ...): they fail to initialize with "SYSTEM: Skipping robot: abc.Shadow 3.84(g)".
abc.Shadow 3.83 and 3.83c (indicate that they were built for ...): both initialize correctly.

Hope this helps --DavidR 14:47, 17 November 2009 (UTC)
Interesting. Any other messages, such as in the command line Robocode is run from, or in the console of a failing robot (click its name in the right-hand bar)? There should be errors somewhere saying exactly what failed. --Rednaxela 14:54, 17 November 2009 (UTC)
I probably forgot the -target flag in my latest Eclipse installation. Shouldn't the server reject zero scores anyway? --ABC 15:00, 17 November 2009 (UTC)
Darkcanuck said that there is no reason to reject zero scores, so the server accepts them. --Nat Pavasant 15:04, 17 November 2009 (UTC)
Yes, many top bots legitimately crush their opponents with a zero score. While this is unlikely in melee, it is still possible, especially if the bot crashes. The reason zero scores were originally excluded was due to a bug in the rumble client that is long since extinct. --Darkcanuck 16:36, 17 November 2009 (UTC)

Here is a paste of the Terminal:

pc001598:roborumble2 david$ ./ 
Using robohome .
Creating file
Preparing battle...

Let the games begin!
2009-11-17 15:14:01.714 java[10288:81f] CoreDragCreate error: -4960
Round 1 cleaning up.

abc.Shadow 3.84 still has not started after 23610 ms... giving up.

The console of the failing robot just says: SYSTEM: Skipping robot: abc.Shadow 3.84 --DavidR 15:16, 17 November 2009 (UTC)

I just tested Shadow 3.84g on my MacBook with Java 5 and it failed (without any helpful errors), and it worked fine under OpenJDK 6. So I'm guessing there's nothing wrong with DavidR's install and a handful of bots just need to fix dependency on Java 6. --Voidious 15:37, 17 November 2009 (UTC)

It will be a happy day when we can toss java 5 compatibility. Java 7 might be out by then though, haha. --Rednaxela 15:59, 17 November 2009 (UTC)

I'll try to install OpenJava 6 here and use for Melee... Voidious, Did you install The SoyLatte version? --DavidR 16:02, 17 November 2009 (UTC)

Actually, SoyLatte is what I usually use, I just happened to install OpenJDK 6 recently and tried that in this case. I can definitely recommend SoyLatte for Robocode on older Macs that don't have Java 6. --Voidious 16:15, 17 November 2009 (UTC)

I don't remember the rumble client even running on a Mac with Java 5, but maybe robocode fixed that? DavidR, if you have a 64-bit capable processor and Leopard you can download an optional Java 6 package from Apple's web site. I used that before moving to Snow Leopard. --Darkcanuck 16:36, 17 November 2009 (UTC)

I've switched to

java -version
java version "1.6.0_15"
Java(TM) SE Runtime Environment (build 1.6.0_15-b03-226)
Java HotSpot(TM) 64-Bit Server VM (build 14.1-b02-92, mixed mode)

And still abc.Shadow 3.84g doesn't load, but this time it gives more details in the Console

abc.Shadow 3.84g: Throwable: robocode.exception.RobotException: You cannot call the getRoundNum() method before your run() method is called, or you are using a Robot object that the game doesn't know about.
robocode.exception.RobotException: You cannot call the getRoundNum() method before your run() method is called, or you are using a Robot object that the game doesn't know about.
	at robocode._RobotBase.uninitializedException(Unknown Source)
	at robocode.Robot.getRoundNum(Unknown Source)
	at Source)
	at Source)

Is it now a problem of using a 64bit Java? I'll go home now, so I'll get back to this tonight or tomorrow.. --DavidR 16:31, 17 November 2009 (UTC)

(edit conflict) So if only "new" bots from ABC and justin are affected due to lack of Java 5 support then I'm inclined to chalk this up as a bot issue and re-enable DavidR's client. Which suits me just fine since reverting all those melee battles will be a real nightmare: 239 battles * min 45 pairs per battle = 10755 pairs to rescore. I'll wait for some more feedback to accumulate here and make the change this evening. --Darkcanuck 16:36, 17 November 2009 (UTC)

I threw 2.2b back in the rumble, thinking it was safe... it did the same thing. :( I think my bot is somehow messed up; I suspect I packaged it before updating the new bin location in Robocode's preferences. I'll delete the bot and remove it from the depository like I should have. Sorry for any headaches... -Jlm0924 19:59, 17 November 2009 (UTC)
Make sure you delete all the .class files when you recompile for Java 5, otherwise it won't re-generate them. --Skilgannon 09:00, 18 November 2009 (UTC)

Can't say I agree 100% with the new zero-scores policy; it "shielded" the rumble from this type of problem. How was the ELO/Glicko formula changed for the 100/0 case? --ABC 13:49, 18 November 2009 (UTC)

There were no changes to either formula -- as far as I can tell there is no issue with a 100-0 score in either system. Which makes sense, since they're both adapted from chess where scores are normally 1-0. I feel it's unfair to penalize bots that score 100%, especially if their opponent crashes. It's also unfair to the author of the crashing bot, since they may not realize there's a problem, especially if it's intermittent. Apart from rael's battles (and we never learned the source of that problem, unfortunately) I've had to revert very few battles, and discarding zeros would not really have helped in any of those cases. --Darkcanuck 16:33, 18 November 2009 (UTC)
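To illustrate the point, a chess-style rating update treats a 100-0 shutout simply as a fractional score of 1.0, and nothing in the update blows up. A minimal Elo-style sketch (the server uses ELO/Glicko-2 variants, so this is only the general shape, with an illustrative K-factor):

```java
// Sketch of a chess-style rating update applied to a shutout.
// A 100-0 battle maps to s = 1.0, the same value a chess win uses,
// so the update stays finite and well-behaved.
public class ShutoutUpdate {
    static double expected(double ra, double rb) {
        return 1.0 / (1.0 + Math.pow(10, (rb - ra) / 400.0));
    }

    // One Elo update step; s is the fractional score (share of total
    // points), 1.0 for a 100-0 shutout. K = 32 is illustrative.
    static double update(double ra, double rb, double s, double k) {
        return ra + k * (s - expected(ra, rb));
    }

    public static void main(String[] args) {
        double r = update(1600, 1600, 1.0, 32);  // shutout vs. an equal
        System.out.println(r);                   // finite, modest gain
    }
}
```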
A malfunctioning client is very hard to detect; the only way I can think of is to check for abnormal deviations from the 'normal' score. For new bots, however, there is no normal score yet, so what would the abnormal score be in that case? Also think of BulletCatcher, which gets extreme scores but is a valid bot. A few things can be checked in an easy way though: in one-on-one the sum of the scores must be more than 2100, and a score of 2100-0 means that one bot is crashing immediately. In melee the only thing I can think of is that a 0-0 score definitely is wrong. Alas, no record is kept of which bots are fighting in a battle, so there is no possibility to flag an entire battle as invalid. I think it is fair to count the zero scores in one-on-one, except maybe those 2100-0, as there are several very good bots and also several very 'not so good'. --GrubbmGait 22:31, 18 November 2009 (UTC)
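The easy checks suggested above could be sketched as follows (a sketch only; the thresholds are illustrative, with 2100 assumed to come from 35 rounds times 60 survival/bonus points per round in 1v1):

```java
// Sketch of the sanity checks suggested above. Thresholds are
// illustrative: 2100 assumes 35 rounds x 60 survival/bonus points,
// the minimum combined 1v1 score when one bot survives each round.
public class ResultSanity {
    static boolean plausibleOneOnOne(int scoreA, int scoreB) {
        int total = scoreA + scoreB;
        if (total < 2100) return false;   // combined score too low
        if (total == 2100 && (scoreA == 0 || scoreB == 0))
            return false;                 // 2100-0: one bot crashed immediately
        return true;
    }

    static boolean plausibleMeleePair(int scoreA, int scoreB) {
        return scoreA != 0 || scoreB != 0;  // 0-0 is definitely wrong
    }

    public static void main(String[] args) {
        System.out.println(plausibleOneOnOne(2100, 0));  // rejected
        System.out.println(plausibleMeleePair(0, 0));    // rejected
    }
}
```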

DavidR, you're back in business -- I've re-enabled your client. Before doing any more uploads, first delete all the files in your roborumble/files and roborumble/temp directories. That will clean out cached data and give your client a fresh start. Also, I strongly recommend running a single iteration, then checking the console output for any errors or strange results. --Darkcanuck 16:39, 18 November 2009 (UTC)

I'll restart the battles using Java 6 in 64-bit mode on the Mac Pro. I'll do one iteration and check the output... if it's OK I'll then set this machine up to run in continuous mode... --DavidR 16:26, 19 November 2009 (UTC)

Darkcanuck, I've run a few iterations now. Everything seems OK. Could you check if the results are good? I'll set this up in approx. 4h so it can run overnight. --DavidR 17:13, 19 November 2009 (UTC)

Hey guys, I'd like to start contributing by running some rumbles. I downloaded and installed it, opened the meleeRumble.txt from the latest version of Robocode, added my user name, then saved it (overwriting my meleeRumble.txt)... I assume that is all that's needed. I'll try one iteration... Please let me know if all is well... -Jlm0924 18:10, 20 November 2009 (UTC)

Take a look at RoboRumble/Starting_With_RoboRumble - you can download a 'superpack' that has all the text files pre-configured and most of the bots already installed. --Skilgannon 18:24, 20 November 2009 (UTC)

Thx Skilgannon, was having difficulty locating the info....-Jlm0924 18:53, 20 November 2009 (UTC)

Looks like there's another problem with a semi-retired bot. Thorn 1.252 and Toorkild 0.2.2 both have a battle against Toorkild 0.2.1 in General and Micro Rumbles, so those rumbles are never getting to "pairings complete". --Voidious 04:47, 14 December 2009 (UTC)

Actually it was Garm 0.9z. Thanks for the catch, should be updated next time those bots get a battle. --Darkcanuck 05:09, 15 December 2009 (UTC)


I recently tried out and it seemed much smoother for development purposes, and I was wondering: who thinks it's ready to enable that version for the rumble? It seems very solid to me :-) --Rednaxela 16:22, 7 January 2010 (UTC)

I haven't been active for several months, besides running a client. I don't see much in the way of bugs on the tracker though. Have you tried out this version locally? If you can run some iterations of 1v1 and melee with uploads disabled and the results seem ok, then I would be open to having one or two lower volume clients try out --Darkcanuck 03:10, 8 January 2010 (UTC)
It could have a lot more bugs than those which are currently in the tracker. If you read version.txt you will see that some bugs were just fixed recently without any artifact number (thus they aren't in the tracker). I'm sure we created a lot of bugs with the redesign. Please note that some score differences may occur due to the movement behaviour changes. But still, if we don't change the version now, we won't ever be able to change it. Fnl once said to me that he is really tired of fixing bugs instead of adding new features. One thing I'm very sure of is that a lot of new features are coming in the 1.7.2 version. We even have a separate branch for it right now. 1.8 should come with .NET capability. I don't think we'll have any more stable version besides 1.7.1.x for now. --Nat Pavasant 11:49, 8 January 2010 (UTC)
It's true that it could be much buggier than the tracker indicates, but I haven't found any bugs (except one UI one that I need to get to reporting) in my testing so far. Basically, I think the approved rumble version could really really really use an update, and yeah, 1.7.1.x looks like it's the most stable things are going to be for a good while. I really do doubt score differences will be significant from the movement changes. Anyway, I'm going to start testing as a rumble client with upload disabled some time tonight. --Rednaxela 20:06, 8 January 2010 (UTC)
Does this still do stop-and-go movement differently than? If so, it'll affect me a bit, but I can adjust and adapt. I've noticed that this version also compiles a bit faster, which is nice. --Miked0801 23:59, 8 January 2010 (UTC)
Hmm, not really, depending on what you mean. Stopping and going in one direction would act the same. The changes it makes are:
  • When decelerating through 0, you decel for part of the tick and accel for the other part. So you'd go from 1 to -0.5, or 0.5 to -0.75. You can no longer vibrate between 1 and -1 (you'd end up around +/-0.7.)
  • If you use setAhead(x) or setBack(x), you will always move that distance in the optimal number of ticks. I highly doubt any bots really depend on or benefit from the old behavior.
There are some benchmarks and links to other discussions here, if you're interested: User:Voidious/Robocode Version Tests.
--Voidious 01:24, 9 January 2010 (UTC)
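The first bullet can be sketched numerically. The following is a simplified model of the rule described above, assuming Robocode's standard rates of deceleration 2 and acceleration 1 per tick; it is an illustration, not the engine's actual code:

```java
// Simplified sketch of the "decelerate through zero" rule: when reversing,
// part of the tick is spent braking to 0 (at rate 2) and the remainder
// accelerating the other way (at rate 1). Assumed rates, not engine code.
class VelocitySketch {
    static double reverseVelocity(double velocity) {
        if (velocity >= 2.0) {
            return velocity - 2.0;                   // a full tick of braking
        }
        double brakingFraction = velocity / 2.0;     // fraction of tick spent reaching 0
        return -(1.0 - brakingFraction) * 1.0;       // rest of tick accelerates backward
    }
}
```

This reproduces the examples above: reversing from 1 gives -0.5, and from 0.5 gives -0.75. Iterating it (a bot flipping direction every tick) converges on a speed of 2/3, consistent with the "around +/-0.7" note.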

Okay, I've built a new rumble superpack for, this time with TwinDuel support as well, and I've just fixed all the bot download links when freshly downloading them all. So far, running it with uploads disabled seems to be working well except for the following:

Fighting battle 2 ... nexus.Prototype 1.0,simonton.mega.SniperFrog 1.0
Got an error with simonton.mega.SniperFrog: java.lang.ClassNotFoundException: Robots are not allowed to reference Robocode engine in package: javax.swing
Could not load robot: simonton.mega.SniperFrog
Skipping battle because can't load robots: nexus.Prototype 1.0,simonton.mega.SniperFrog 1.0
Got an error with morbid.MorbidPriest: java.lang.ClassNotFoundException: class morbid.EnemyFireDetectedEvent overrides final method getTime.()J
Could not load robot: morbid.MorbidPriest
Skipping battle because can't load robots: morbid.MorbidPriest 1.0,drd.Dreadknoght 0.9

The first I consider a non-issue because really... a robot shouldn't be calling Swing stuff when running, IMO. The second I'm investigating (they might happen with anyway). Any other thoughts? --Rednaxela 04:14, 9 January 2010 (UTC)

The first one seems like a reasonable restriction put in by Fnl, so unless we want to never allow a new version of Robocode, we're gonna have to accept its effect on SniperFrog. I could drop an e-mail to Simonton to let him know in case he wants to fix/rerelease it - he's not actively Robocoding, but he's still around.
The MorbidPriest situation seems kind of unfortunate. Sounds like he has a getTime() method in a subclass of some event, and that method has since been made final? That's an ancient bot that's kinda neat to still have around, but I'm sure we'll all live. =) Thanks for taking the time to investigate --Voidious 04:43, 9 January 2010 (UTC)
As a note, I've reported tickets on for both issues; on the first I made a note that I'd close it as WontFix if it were me, but I reported it anyway as an informational thing. Just thought I should put the tickets there so Fnl knows, even if I very much expect both to be valid WontFix situations. --Rednaxela 04:54, 9 January 2010 (UTC)
Found another one --Rednaxela 05:06, 9 January 2010 (UTC)
Just curious - do you get this same error if you run Robocode normally with Blur? Runs OK on my system in I was gonna test if just un/rezipping the .jar would fix it. Also, just wanted to note that Blur is open source, so if it required code changes, we could fix it like we did SilverSurfer and TheBrainPi. --Voidious 07:04, 9 January 2010 (UTC)
Actually these restrictions are from Zomboch, but that doesn't really matter.
The problem with the final keyword has come up before, too. With a normal design, most API classes used for communication should be made final, or it could cause confusion if any robot code overrides them, right? So at some point (I don't remember when) Pavel (Zomboch) made them all final, and at that time we had a problem with a robot that had a class extending ScannedRobotEvent, and it really made Pavel unhappy to remove the final keywords. I think we should decide whether to keep the clean design or the backward compatibility.
About Blur's one, it seems to be a bug due to the fix for Skilgannon's report 419486 "Lockup on start if too many bots in robots dir (cont'd)". --Nat Pavasant 09:23, 9 January 2010 (UTC)
Actually, even if Swing isn't called, just having this:
package aa;

import javax.swing.JFrame;

import robocode.Robot;

class A extends Robot {
    JFrame a; // never instantiated; merely referencing javax.swing is enough
}
can already trigger the error. The protection of Swing is quite complicated. Normally in Robocode we use Java's Permissions to allow access to a class as long as it doesn't do anything harmful, and that includes AWT. But with Swing we don't have, AFAIK, any Swing Permission, so this forces us to completely disallow loading any classes from the javax.swing package. Thus it creates a problem with the above code, as I have already pointed out to Fnl and Zomboch once. --Nat Pavasant 10:36, 9 January 2010 (UTC)
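To illustrate the mechanism (a toy sketch only, not Robocode's actual security code): a class loader can veto an entire package at load time, which is why merely declaring an unused javax.swing field is enough to fail, before anything is ever called.

```java
// Toy class loader that refuses anything from javax.swing, mimicking the
// behaviour described above. Robocode's real loader and security manager are
// more involved; this only demonstrates the load-time veto.
class RestrictingLoader extends ClassLoader {
    @Override
    protected Class<?> loadClass(String name, boolean resolve)
            throws ClassNotFoundException {
        if (name.startsWith("javax.swing")) {
            throw new ClassNotFoundException(
                    "Robots are not allowed to reference Robocode engine in package: javax.swing");
        }
        return super.loadClass(name, resolve); // everything else delegates normally
    }
}
```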

Feature Request

I think it would be fun to be able to compare bots of the higher weight divisions against the lower divisions. I'd love to see just how dominant Druss or Shadow or whoever would be if compared only to Nanos or Micros. It would also help me as a development tool, knowing where I still have weaknesses to address in my own nano strategies. --Miked0801 17:44, 15 February 2010 (UTC)

Beta Testing!

Right now I'm testing Beta in hopes that everything necessary will be resolved so that it will be rumble-ready. If anyone finds any issues with Beta or wants to see how the current situation looks, see the bug tracker. Right now I'm running the client with uploads disabled, and later today I will set up a localhost rumble server for it to upload to, so that I can more easily search for bots that score much differently than in. Other people testing as well would of course be good too :-) --Rednaxela 18:27, 16 February 2010 (UTC)

Summary of results so far:

  • Long-time memory leak issues (present in and much earlier as well) are much improved, but don't seem entirely fixed.
    • I will have a second look at this for sure. --Fnl 23:17, 20 February 2010 (UTC)
  • rc.yoda.Yoda 1.0.6c - It was relying on the old .robocache directory containing the extracted class files, and Robocode now reads the jar files in-place. The ticket will probably be WontFix. I have the author's blessing to release a version fixed for 1.7.x.
    • Reading internal stuff in fancy ways is definitely not recommended. Don't count on internal directory names and structures. Note that '.data' (formerly '.robotcache') is still subject to change! --Fnl 23:17, 20 February 2010 (UTC)
  • hvilela.HVilela 0.9.3 - It was relying on casting what it was passed in onPaint to "robocode.robotpaint.Graphics2DProxy" which wasn't supposed to be an 'official' part of the API. Ticket will probably be WontFix.
    • Again, don't access internal stuff. --Fnl 23:17, 20 February 2010 (UTC)
  • yk.JahRoslav 1.1 - It has strange intermittent issues. The cause may not be Robocode 1.7.x. Still need to look into this more.

I'm going to keep my primary client running as Beta, on a localhost roborumble server and continue to collect further results so I can see if anything looks amiss. --Rednaxela 05:36, 17 February 2010 (UTC)

New result:

  • Homer.Barney 1.0 - It tries to write its data file without using RobocodeOutputStream. Robocode 1.7 is stricter about this, and the bot clearly shouldn't be doing this. Not sure I'll bother making a tracker ticket for this one.

Things seem to be in pretty good shape for the most part. Any comments on what else people think needs to be looked out for would be welcome. --Rednaxela 14:12, 18 February 2010 (UTC)

I am very happy to hear that you have started testing this. I really hope it fulfills everything necessary for RoboRumble. If not, please report all bugs you find. Note that robots are only supposed to use the public APIs, i.e. all the classes you find in the 'robocode' and 'robocode.util' packages. Don't rely on internal stuff in Robocode as it might/will change. --Fnl 23:17, 20 February 2010 (UTC)

Got up to 20 thousand 1v1 battles with no issues discovered except what I've already reported (personally, I'm most concerned that the memory leaking doesn't seem entirely fixed; see my most recent comments in ticket 2930266). I just realized that I had forgotten to test melee and teams, which are also important! I also still need to do a pairing-by-pairing comparison between the incomplete pairings I have on my localhost rumble server and the matching pairings on the main rumble server. I'm busy studying for midterms now, so I don't have time to set up that pairing comparison, but since that's probably enough 1v1 data for now, I'm switching to testing melee. Hopefully there will be no new issues. --Rednaxela 16:53, 21 February 2010 (UTC)

Thanks for putting in so much effort to test the latest version! If melee and teams look good, I can set up the server to accept uploads for Beta from your client only, as a test. --Darkcanuck 18:01, 21 February 2010 (UTC)

Here's a raw table of the biggest differences in melee score. Note that my data covers only 1.6% of the total melee battle count on the main server (still a lot!), so some of the differences could just be noise.

Competitor | Rank (main) | APS (main) | Survival (main) | Rank (Beta) | APS (Beta) | Survival (Beta) | APS Diff | Rank Diff
hvilela.HVilela 0.9.3 | 98 | 55.4 | 60.44 | 287 | 0.49 | 18.76 | -54.91 | -189
ph.melee.ArcherME 0.3 | 136 | 51.96 | 36.61 | 288 | 0.36 | 18.75 | -51.6 | -152
com.syncleus.robocode.Dreadnaught 0.1 | 280 | 29.24 | 22.18 | 235 | 40.44 | 21.46 | 11.2 | 45
jlm.javaDisturbance 0.59 | 85 | 56.83 | 67.73 | 166 | 48.36 | 60.17 | -8.47 | -81
stelo.Moojuk 1.3 | 279 | 30.34 | 22.59 | 264 | 35.88 | 28.65 | 5.54 | 15
bayen.nut.Squirrel 1.615 | 241 | 40.3 | 25.71 | 193 | 44.83 | 44.42 | 4.53 | 48
justin.DemonicRage 2.5d | 6 | 67.88 | 91.25 | 20 | 63.65 | 88.75 | -4.23 | -14
jab.DiamondStealer 5 | 275 | 32.03 | 20.23 | 280 | 28.14 | 18.5 | -3.89 | -5
lrem.micro.SpikeSoldier 0.4 | 142 | 50.98 | 30.69 | 119 | 53.42 | 33.88 | 2.44 | 23
sch.Simone 0.3d | 137 | 51.9 | 61.89 | 155 | 49.7 | 58.6 | -2.2 | -18
stelo.SoRobotNanoMelee 1.2 | 141 | 51.18 | 55.7 | 122 | 53.2 | 58.61 | 2.02 | 19
1.0 | 140 | 51.24 | 48.96 | 157 | 49.39 | 39.41 | -1.85 | -17
stelo.SoJNanoMelee 1.1 | 226 | 41.65 | 38.14 | 240 | 39.93 | 39.4 | -1.72 | -14
tripphippy.Alice 1.1 | 135 | 52.01 | 56.02 | 116 | 53.62 | 60.14 | 1.61 | 19
jrm.Test0 1.0 | 168 | 48.49 | 31.27 | 151 | 50.07 | 32.51 | 1.58 | 17
cx.Princess 1.0 | 70 | 58.69 | 76 | 55 | 60.25 | 75.29 | 1.56 | 15
pedersen.Moron 2.0 | 288 | 13.65 | 18.3 | 286 | 15.17 | 17.51 | 1.52 | 2

I'm going to investigate the larger differences in detail later tonight, but ones like the Moron diff are just because the top two are managing to fail even harder than Moron. Other than these results, all of the APS score differences are below 1.5, which I think can be attributed purely to noise, given that pairings aren't quite complete (not that pairing completeness matters as much in melee) and that some bots have fewer than 500 battles in my Beta data. --Rednaxela 16:02, 27 February 2010 (UTC)

As a quick note, the team rumble in Beta fails so spectacularly that nothing can even upload to the rumble server. I don't have time to post a bug tracker ticket or give a more in-depth description yet, but suffice to say, I'd like to see a Beta2 or release candidate with this fixed so I can test it before the final release. I'll make a bug tracker ticket tonight. --Rednaxela 18:37, 27 February 2010 (UTC)

Thank you guys for putting a big effort into testing Beta! :-)

Keep reporting bugs on SF as soon as you discover a new one. It would be cool to have a very stable and hopefully as good as "bug free" version of Robocode - especially for RoboRumble, which is crucial. You can see all open bugs here (+ which bugs that have been fixed since the last released version). --Fnl 21:40, 1 March 2010 (UTC)

Some web suggestion

Hi Darkcanuck, I think you should add div#header {position:fixed} to your CSS. I believe it would give better access to the top navbar, especially when you are scrolling down the page. Of course, this doesn't work with older IEs =) --Nat Pavasant 09:48, 2 May 2010 (UTC)

Optional XML

Can we get an optional XML output for the "RatingsDetails" page? Something along the lines of the following (it doesn't need to be formatted exactly like this, of course), using a GET or POST parameter such as format=xml.

<?xml version="1.0" encoding="UTF-8" ?>
<data id="details">
	<string id="name">chase.s2.Seraphim 2.0.6</string>
	<string id="game">roborumble</string>
	<real id="rating_classic">1283.908</real>
	<real id="rating_glicko2">1985.2</real>
	<integer id="pairings">770</integer>
	<integer id="pairsWon">745</integer>
	<integer id="numBattles">2169</integer>
	<integer id="lastBattle">1281719063000</integer>
	<real id="specializationIndex">395.27096860526</real>
	<real id="momentum">-32.920652692531</real>
	<real id="APScore">0.78032</real>
	<real id="APSurvival">0.87184</real>
	<data id="pairings">
		<string id="name">ab.DengerousRoBatra 1.3</string>
		<real id="ranking">1166.683</real>
		<real id="score">67.547</real>
		<integer id="numBattles">2</integer>
		<integer id="lastBattle">1279525810000</integer>
		<real id="expectedScore">60.801319242256</real>
		<real id="PBI">6.7456807577444</real>
		<!-- etc -->
	</data>
</data>

Chase-san 20:13, 13 August 2010 (UTC)

Have you seen Darkcanuck/RRServer/Query? It's a query API for the RoboRumble server. It's pretty awesome and should let you do whatever it is you want from the XML output. (Or you could use it to create the XML if that's what you really want.) --Voidious 20:20, 13 August 2010 (UTC)

Wrong results?

I've the feeling that Rednaxela@ returns false results. For example:
  • gf.Centaur.Centaur 0.6.5 vs. rz.HawkOnFire 0.1: when I run the battle on my computer, I get ~93% total score for Centaur; Rednaxela@ returns 28%.
  • Centaur vs. pla.Memnoch 0.5: my computer ~67% for Centaur; Rednaxela@ 28%.
  • Centaur vs. ahf.Acero 1.0: my computer ~85% for Centaur; Rednaxela@ 30%.
Is this normal? Or is there a problem with my computer? Does Rednaxela@ upload false results? Has somebody an idea? I don't save data on the computer, so this isn't the reason. -- g0ld3nf0x

There were reports elsewhere on the wiki that Centaur runs very slowly. It may be possible that your bot is skipping turns on other people's rumble clients. Things to check first:
  • Are you running (official rumble client version) for your tests?
  • Is your CPU constant set higher than normal? (can reset by setting "Recalculate CPU constant" from the Options menu)
  • Does your bot use extra threads?
Once Centaur 0.6.5 has more battles uploaded we'll have more data to see if it's client-related. But Rednaxela is the maintainer of the rumble super pack, so it's unlikely that he's running a bad rumble client.
I would also encourage you to create a wiki page for yourself and your bot. --Darkcanuck 15:10, 19 July 2011 (UTC)
I just checked and Centaur is indeed skipping a lot of turns on my system too. You may not be seeing this on your system if you have painting turned on -- when paint is enabled Robocode doesn't enforce turn limits in case your painting code is slow. Try recalculating your CPU constant and run a battle with paint off to see this. --Darkcanuck 15:29, 19 July 2011 (UTC)
Thanks for the help! I noticed before that it's often skipping turns, but I didn't think that it would have much effect to the results. I will try to fix it. --G0ld3nf0x 16:20, 19 July 2011 (UTC)
I tested on my machine, which is currently running the rumble, and gf.Centaur.Centaur 0.6.5 skipped something like 50% of turns, and on rare occasions would skip enough in a row that Robocode thought your bot was stuck and disqualified it entirely.
Your new version (0.6.6) skips far fewer turns (roughly 5 to 10 per round), but just so you know, that's still very high compared to most robots. It probably doesn't affect your score much in 0.6.6 anymore, but it would still be worth fixing IMO. --Rednaxela 22:38, 19 July 2011 (UTC)
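For anyone debugging this sort of thing: Robocode delivers a SkippedTurnEvent to AdvancedRobot.onSkippedTurn whenever the bot overruns its time allowance, so counting those events per round is the easiest in-game check. As a rough standalone sanity check, you can also just time a chunk of per-turn work against a budget; the budget below is an arbitrary assumption, not the real calibrated CPU constant:

```java
// Crude harness: does `turnWork` finish within `budgetNanos`? Robocode's
// real skipped-turn logic compares against its calibrated CPU constant;
// the budget here is only a stand-in for illustration.
class TurnBudget {
    static boolean wouldSkip(Runnable turnWork, long budgetNanos) {
        long start = System.nanoTime();
        turnWork.run();
        return System.nanoTime() - start > budgetNanos;
    }
}
```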

Can I have the code?

Or, better yet, can you throw it up on GitHub? Either way, I'd like to have it handy when I see things I want to improve. (Right now it's that I want the bots on a RankingDetails page to be sorted by APS.) If you host the code on GitHub, I (and others) can fork it and send pull requests to the maintainers of the main repo.

I also want to express how impressed I am by the service. I can spend lots of time just browsing and checking battles and comparing bots and so on. Bloody awesome.

-- PEZ 19:49, 10 November 2011 (UTC)

Actually, the code is already posted in an SVN repository: It's mentioned on Darkcanuck/RRServer/Updates but it's kind of buried among other information. :) --Rednaxela 23:13, 10 November 2011 (UTC)

Thanks! I really tried to find the info before asking. I now put a link where I expected to find it myself. -- PEZ 19:35, 11 November 2011 (UTC)


Thread title | Replies | Last modified
retiring ELO column | 6 | 16:51, 17 February 2012
FatalFlaw's uploads have suspicious APS for Tomcat | 0 | 05:58, 16 February 2012
kidmumu uploads | 3 | 18:16, 1 February 2012
Feature Request: average APS diff in bots compare | 6 | 16:55, 17 November 2011
Performance | 1 | 23:48, 13 November 2011

retiring ELO column

Now that everyone's ELO rating is subzero in General 1v1 =), is it maybe time to retire it altogether?

Voidious 23:07, 14 February 2012

I'm all for it =) Although, doesn't the LRP depend on ELO data? Maybe shift that over to Glicko data instead? And if there was some way to make the LRP show the 'expected' option by default... that would make my day =)

Skilgannon 09:55, 15 February 2012

I'd also support removal of ELO from the rumble, and replacing it with Glicko or Glicko2 in the places that use it (LRP).

Rednaxela 17:50, 15 February 2012

I also agree

Jdev 18:53, 15 February 2012

Yeah, ELO doesn't do much anymore. So agreed as well.

Chase-san 20:51, 15 February 2012

Elo is working fine, even with negative scores, but keeping both Elo and Glicko-2 is redundant. So, removing one of them is fine by me.

MN 21:00, 15 February 2012

Luckily we still have the music . . .

GrubbmGait 16:51, 17 February 2012

FatalFlaw's uploads have suspicious APS for Tomcat

FatalFlaw's uploads have suspicious APS for Tomcat:

  • lxx.Tomcat 3.55 VS 1.88
  • lxx.Tomcat 3.55 VS baal.nano.N 1.42
  • lxx.Tomcat 3.55 VS gf.Centaur.Centaur 0.6.7

Darkcanuck, can you rollback all his uploads?

Jdev 05:58, 16 February 2012

kidmumu uploads

Results from kidmumu uploads don't come close to my uploads. Is there something wrong?

MN 22:03, 29 January 2012

I haven't had a chance to check if this could affect mn.Combat, but my #1 guess would be that perhaps it's a java version issue (i.e. kidmumu is using Java 5 and Combat requires Java 6?).

Failing that, I'd have to think that kidmumu's client may be skipping turns.

Rednaxela 20:21, 31 January 2012

[Combat vs Corners]

[Combat vs MyFirstRobot]

[Combat vs TrackFire]

Probably a Java version issue. I'll downgrade to 1.5 in future versions. But I didn't check other bots' scores.

MN 17:56, 1 February 2012

I'm sure there are lots of bots that require Java 6, right? We might want to have Darkcanuck rollback all his uploads until we can get kidmumu onto Java 6.

Voidious 18:16, 1 February 2012

Feature Request: average APS diff in bots compare

I find that until all pairings are done it's very useful to know the current average difference in APS between two versions - after about 100 random battles this number tells you fairly exactly whether the newer version is better than the older.
Darkcanuck, can you schedule adding a row for the columns "% Score" and "% Survival" in the "+/- Difference" section of the bot compare page, with the average value of the corresponding columns? I think it's 1-2 hours of work at most.

Jdev 11:12, 17 November 2011

I think this is already covered by the 'Common % Score (APS)' and 'Common % Survival' rows, the lowest two lines in the top table. At least I use them to check whether my changes have a positive (or negative) effect when the pairings are not complete yet.

GrubbmGait 13:10, 17 November 2011

No, maybe I wasn't clear.
I mean that I want to know the average difference across pairings between the two versions. According to my tests, this number stabilizes much faster than APS. What's more, Common % Score doesn't make sense for this, because when there is only 1 battle in every pairing it's exactly equal to APS, and otherwise there may be 10 battles against Walls and 1 battle against Druss.

Jdev 13:32, 17 November 2011

As far as I know, when your new version has, for example, 100 pairings, you will see the average APS for those 100 pairings, AND for your older version you will also see the APS for the same 100 pairings. And you are right, this indicates much more reliably what your final score will be (relative to your older version) than plain APS does. The one who can really answer this question is Darkcanuck.

GrubbmGait 13:46, 17 November 2011

Wow, if things are like you say, it's really what I want, thanks :)

Jdev 13:53, 17 November 2011

The common %score is calculated just like APS, but only for pairings that the old and new versions have in common. That makes it easier to compare two versions when the new one is still missing many pairings, or in the case where the old bot may have pairings against a lot of retired bots (and may be missing scores vs newer bots). I think that's what you're looking for...

Darkcanuck 16:51, 17 November 2011

Yes, thank you:)

Jdev 16:55, 17 November 2011
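For the curious, the "common pairings" comparison described above can be sketched in a few lines; the map-of-scores representation here is a hypothetical stand-in for whatever the server actually stores:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the "Common % Score" idea: average each version's per-pairing
// scores only over the opponents that both versions have fought.
class CommonAps {
    // Returns {old average, new average} over the shared pairings.
    static double[] commonAps(Map<String, Double> oldScores,
                              Map<String, Double> newScores) {
        double oldSum = 0, newSum = 0;
        int shared = 0;
        for (Map.Entry<String, Double> e : oldScores.entrySet()) {
            Double newScore = newScores.get(e.getKey());
            if (newScore != null) {          // only pairings both versions have
                oldSum += e.getValue();
                newSum += newScore;
                shared++;
            }
        }
        if (shared == 0) {
            return new double[] { 0.0, 0.0 };
        }
        return new double[] { oldSum / shared, newSum / shared };
    }
}
```

Because retired opponents and not-yet-fought opponents are dropped from both sides, the two averages are directly comparable even when the new version's pairings are incomplete.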


Performance

Can you turn on .htaccess browser caching for the results?

ExpiresActive on
ExpiresByType image/gif "access plus 1 year"

Other performance-enhancing things you can do: set a specific size for the images, inline or via CSS (CSS would be easier) - this would speed up page loading and be less annoying while all the requests are going through (default-sized images deform the table before they load); and minify the HTML/CSS/JS (less to send).

Not doing (or not doable) for known reasons: serving identical files from the same URL (flag images).

Chase-san 22:31, 11 October 2011

Just added the cache expiry directives -- let me know if that helps. The minification isn't necessary, my server already sends all files gzip'd, so the performance enhancement would be minimal at best.

Darkcanuck 23:48, 13 November 2011