Talk:LiteRumble
Since today, when uploading results, this message keeps appearing in my RoboRumble client:
java.io.IOException: Server returned HTTP response code: 500 for URL: http://literumble.appspot.com/UploadedResults
Has anybody observed something similar?
Ah, this was caused by a bug due to a new check I added for dropping battles more than 24 hours old (to prevent old versions being added back by mistake). Can you try again and let me know if it is fixed for you now?
It seems that when uploading out-dated pairings, the client still receives HTTP 500, which causes those out-dated pairings to be duplicated (another bug) and re-uploaded twice each time... and then to fail again with the doubled count, which grows like crazy.
Could you change the response to something like "200, out-dated pairings dropped" to fix this? Thanks ;)
Well, that was the intention, but it seems that Python doesn't auto-convert datetimes to strings so my logging was crashing it. Should be fixed now!
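For reference, a minimal sketch of that kind of fix (hypothetical names, not the actual LiteRumble source): in Python, concatenating a datetime into a log string raises a TypeError, so the stale-battle check crashed and surfaced as an HTTP 500 even though the drop itself was working.

```python
import logging
from datetime import datetime, timedelta

MAX_AGE = timedelta(hours=24)  # drop battles more than 24 hours old

def handle_upload(battle_time):
    """Hypothetical sketch of the out-dated-pairing check."""
    if datetime.utcnow() - battle_time > MAX_AGE:
        # Buggy version: str + datetime concatenation raises TypeError,
        # which the server reported as a 500:
        #   logging.info("dropping stale battle from " + battle_time)
        # Fixed version: let the logger stringify it via %s:
        logging.info("dropping stale battle from %s", battle_time)
        return 200, "OK. Out-dated pairing dropped."
    return 200, "OK."
```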
More information:
java.io.IOException: Server returned HTTP response code: 500 for URL: http://literumble.appspot.com/UploadedResults
Unable to upload results meleerumble,35,1000x1000,Xor,1505812985874,SERVER abc.Shadow 3.84i,24248,6434,28 maribo.mini.MiniQuester 0.1,16657,4100,5
java.io.IOException: Server returned HTTP response code: 500 for URL: http://literumble.appspot.com/UploadedResults
Unable to upload results meleerumble,35,1000x1000,Xor,1505812985874,SERVER abc.Shadow 3.84i,24248,6434,28 aaa.ScaledBot 0.01d,15278,3625,1
java.io.IOException: Server returned HTTP response code: 500 for URL: http://literumble.appspot.com/UploadedResults
Unable to upload results meleerumble,35,1000x1000,Xor,1505812985874,SERVER abc.Shadow 3.84i,24248,6434,28 mld.DustBunny 3.8,14411,3784,0
java.io.IOException: Server returned HTTP response code: 500 for URL: http://literumble.appspot.com/UploadedResults
Unable to upload results meleerumble,35,1000x1000,Xor,1505812985874,SERVER abc.Shadow 3.84i,24248,6434,28 cb.nano.Insomnia 1.0,11918,2981,1
java.io.IOException: Server returned HTTP response code: 500 for URL: http://literumble.appspot.com/UploadedResults
Unable to upload results meleerumble,35,1000x1000,Xor,1505812985875,SERVER abc.Shadow 3.84i,24248,6434,28 ayk.WallHugger 1.0,8941,2568,0
java.io.IOException: Server returned HTTP response code: 500 for URL: http://literumble.appspot.com/UploadedResults
Unable to upload results meleerumble,35,1000x1000,Xor,1505812985875,SERVER abc.Shadow 3.84i,24248,6434,28 yk.JahRoslav 1.1,8499,1922,0
java.io.IOException: Server returned HTTP response code: 500 for URL: http://literumble.appspot.com/UploadedResults
Unable to upload results meleerumble,35,1000x1000,Xor,1505812985875,SERVER abc.Shadow 3.84i,24248,6434,28 rampancy.Durandal 2.1d,7268,1522,0
This is what I'm getting constantly (every time it uploads).
It's been a while since we updated the rumble client version, and the new version brings several important fixes. I'd really appreciate it if someone set up a quick benchmark of a battle or two for each bot in the rumble, and then ran it on the old and new versions to make sure we don't have any regressions. Once this is done we can upgrade the client =)
As far as I know, Robocode 1.9.3.0 hasn't been officially released yet. The website and GitHub still name 1.9.2.6 as the latest version, and there is no 1.9.3.0 download. You can only get it by building from the latest git master. What I linked to was a draft of the new changelog.
I don't think any new releases will be made until poor Fnl finishes dealing with all of the bug reports I piled onto him. What I have been doing for the past week is emptying my mental list of annoyances with Robocode onto its bugtracker.
So currently, it is still in development, and it's a bit too early to do regression testing with this new version.
What does need testing, however, is Robocode on Java 9. We already found CPU constant calculation and team JARs to be broken there, and doubtlessly there are more issues.
Robocode 1.9.3.0 has been released.
Great. As soon as we have a benchmark comparison making sure no subtle score changes have crept in or tons of bots are now broken I'm happy to change the LiteRumble over!
Recently, I noticed that more than half of the battles are dropped because the queue is full — and this doesn't stop even if I wait a few minutes. It seems that all the rumble clients are uploading battles periodically, and that the uploads are pretty concentrated — e.g. all four of my clients upload ~200 battles within ~3 minutes, which fills the queue immediately. And if I take a look at literumble/statistics, I can see that there are 5 to 7 clients uploading within 2 minutes.
It generally takes a client about 15 min to finish 50 battles, but if we vary this (e.g. to prime numbers of battles), the uploads will become more evenly distributed, reducing the concurrency spikes that cause a lot of dropped battles.
Reducing NUMBATTLES would probably help here too. It would also reduce the delay which is the main cause of duplicated pairings for new bots being entered. Maybe a NUMBATTLES of 20 in the main rumble would be good enough to solve the client component of this.
However, I think one of the main causes of the full queue is the batch processing for Vote/NPP/KNNPBI, since the queue needs to be paused while this is running. Because it is paused, the projected processing time goes very high, and it stops accepting new uploads. I have an idea on how to tune this; it should help a bit.
However, even a NUMBATTLES of 3 can't prevent most of the battles from being dropped ;/
Seems that with 8 clients running the rumble at the same time, nothing will help short of stopping some clients.
Worth mentioning that I notice dropped battles with 6 clients too, though not frequently. Seems that with 2 more clients, throughput drops considerably?
Btw, one thing that's really interesting is that duplicates of multiple versions can last for hours. Seems that some clients don't re-check the participants list for hours.
Got it — maybe after the queue is paused for batch tasks and then resumed, it stays near full because there are still many pairings being uploaded. Like a DoS, this reduces the ability to handle high concurrency (although the average number of pairings uploaded per minute is not very high, they come in during a short period of time and get dropped).
Then I think we could increase the queue size a little after a batch task (and then slowly decrease it back to the normal size, to make sure new uploads won't wait forever after a flood of uploads).
Or we could handle uploads during the pause separately — don't put them in the normal queue; rather, store them in a separate queue (capped at normal uploads per minute × pause time). A sketch of this idea follows below.
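Roughly, in Python (a hypothetical sketch of the suggestion above, not LiteRumble's actual implementation — the real server runs on App Engine infrastructure, and the rates here are made up):

```python
from collections import deque

NORMAL_RATE = 50      # assumed average uploads per minute
PAUSE_MINUTES = 5     # assumed length of a batch-task pause

class UploadQueues:
    def __init__(self, main_capacity):
        self.main = deque()
        self.main_capacity = main_capacity
        # Side queue capped at normal uploads/minute * pause time,
        # as proposed above; oldest entries fall off when full.
        self.side = deque(maxlen=NORMAL_RATE * PAUSE_MINUTES)
        self.paused = False

    def submit(self, pairing):
        if self.paused:
            self.side.append(pairing)   # held instead of discarded
            return "OK. Held until batch task finishes."
        if len(self.main) >= self.main_capacity:
            return "OK. Queue full, discarded."
        self.main.append(pairing)
        return "OK."

    def resume(self):
        # Drain held uploads back into the main queue when the
        # batch task completes.
        self.paused = False
        while self.side:
            self.main.append(self.side.popleft())
```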
I was running 8 clients, that was probably causing it. Particularly melee clients cause a huge number of uploads for the amount of processing time required by the client.
I'll save my clients for when there are fewer others running =)
I've been experiencing constant "queue full" messages for the past 2 hours in MeleeRumble, with 3 melee clients + 3 rumble clients. Should this really be happening this often?
I noticed that every time the queue is paused for batch tasks, the massive "queue full" messages don't stop until I pause the clients for a few minutes.
That may be because when the queue is near its max size, the capacity for handling high concurrency decreases dramatically, even though the average processing power doesn't decrease at all.
Using a separate queue while it is paused may help, imo.
Would you mind adding another column called Opponent APS to the bot comparison? When sorting by opponent APS, it could be really useful for seeing the difference between two bots against opponents in different APS ranges, as in the Diff Distribution graphic, but with more information, especially the bot name. This could also help us to create a good test bed ;)
I can take a look (although not this weekend, I'm away from home). However, what would you consider appropriate behavior on bots which had been removed from the rumble, but which are a shared pairing? The APS/diff image does this by just ignoring those pairs, but I don't think we want to do that here. Do I put a 0.0?
Can we assume that APS is relatively stable? Since we can click into the detail page to see the history APS even when that opponent is removed, can we simply put that value?
Oops, this assumption breaks when comparing ancient bots ;( Then polluting the table must be a bad idea. However, why don't we use NaN or N/A instead of 0?
NaN sounds most appropriate. I don't want to have to fetch each bot object that is not in the rumble anymore to look up its last APS.
Done. I also added a link on the BotDetails page to find the bot on the wiki.
Awesome! Thanks a lot.
Since you're in a wish-granting mood, would it be possible to have an API call which returns only the summary table with APS, PWIN, etc. for a given bot in a given game? Right now, I parse http://literumble.appspot.com/BotDetails?api=true
but it spits out the whole comparison table, which is overkill and wastes bandwidth. All I need is the info stored in the header table.
I do it to plot APS vs. bot version for my bot, but I can imagine others will be interested in this too.
LiteRumble says OK. Queue full,XXX vs XXX discarded.
and it is discarding hundreds of battles :\
If the queue gets too long then the priority battles have a severe lag, so the rumble gets really inefficient. Max queue size is based on projected processing time.
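In other words, something like this (a sketch with made-up numbers to illustrate the policy, not the actual server code):

```python
AVG_SECONDS_PER_RESULT = 0.5   # assumed mean processing time per result
MAX_LAG_SECONDS = 120          # assumed acceptable priority-battle lag

def should_accept(queue_length):
    """Accept a new upload only while the projected time to drain the
    backlog keeps priority battles reasonably fresh."""
    projected = queue_length * AVG_SECONDS_PER_RESULT
    return projected <= MAX_LAG_SECONDS
```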
Hi, after a recent bot removal and restore we have strange artifacts: asymmetrical pairing reports.
Have a look at Galzxy 01 stats and sample.Walls 1.0 stats. You can see that Galzxy 01 has 18 battles against Walls. But if you look at Walls stats there are no reports of these 18 battles with Galzxy 01. Galzxy 01 is simply missing from the list of Walls battles.
You just need to wait for Galzxy to get another battle, and it will be fixed again.
https://dl.dropboxusercontent.com/u/4066735/literumble-template.zip is not available now ;(
And archive.org doesn't have an archive of it ;( Does anyone have a backup of it?
By the way, I'm really wondering how LiteRumble works ;) I used to think the battles all ran in the cloud, but then I discovered http://literumble.appspot.com/RumbleStats which shows a lot of contributors with familiar names ;) How can I set up battles to run on my computer and submit the results to LiteRumble? I didn't see any discussion about it.
That isn't needed anymore, the newer versions of Robocode are preconfigured to support Literumble.
Just download 1.9.2.5, edit robocode/roborumble/[roborumble/meleerumble/etc].txt to have your name, and you can run battles on your computer to contribute to the rankings. The website just displays the battles that users have uploaded in a nice way.
It should be tested a lot to be sure that there aren't any errors.
I have created my own LiteRumble instance running as a Google app, as described in previous discussions. Now I want to know if it is possible to delete the battle history and the participating robots? I am experimenting with it because we want to have a RoboRumble event at our office, and I want to delete my previous "testing" robots and matches and have a clean slate when we do the event.
You should be able to delete the data from the AppEngine web console. Otherwise you can simply make the clients upload to a differently named rumble, and the old one can be kept for the demo/setup bots.
I have tried to remove the data from the datastore by selecting all database entries and deleting them. But the data on the webpage is still there, so the data must be stored somewhere else. Creating a new rumble seems like an annoying workaround :)
There may still be a copy in Memcache - if you clear Memcache and the datastore everything should be gone.
One thing I really missed from the old rumble was the LRP, but without ELO/Glicko we can't really do the whole straight-line fit any more. So, instead I have added a Score Distribution image on every bot's details page. The red is APS and the green is Survival (as seen in the image mouseover). The image is directly embedded in the HTML using data URIs, so if you are using IE, only 8 and later will work; otherwise pretty much everything supports it. I'm also planning to add this to the BotCompare page so you can analyse differences in score compared to opponent score for both APS and survival.
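For the curious, the data-URI embedding looks roughly like this (a sketch of the technique, not the actual LiteRumble rendering code):

```python
import base64

def png_to_data_uri(png_bytes):
    """Embed a rendered PNG chart directly in the page HTML, so the
    browser needs no second request to fetch the image."""
    encoded = base64.b64encode(png_bytes).decode('ascii')
    return '<img src="data:image/png;base64,%s"/>' % encoded
```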
Ahhh, neat stuff. That's very nifty with directly embedding the image data there. For some reason the image is displaying very tiny for me though under Firefox 20.0. It gets scaled to the box around it properly under Chromium, but not Firefox.
EDIT: Nevermind... the styles.css file was being cached and that was the problem. A ctrl-r fixed it.
Ah yeah, the styles.css was changed so you need to do a hard-reload.
I've now added the KNNPBI to the bot-details Scores Distribution, and the bot-compare has a Diff Distribution.
There is something fishy with the chart in the right part, close to the end. If you look at the above CunobelinDC score distribution you will see that there are no corresponding red points for stronger opponents, while blue and green points are there. This is quite a common theme for other bots as well.
Also, have a look at this EvBot score distribution: you can see the problem with normalizing, i.e. about 1/4 of the space in the right part of the chart has no points, which is a non-optimal use of the chart space.
Is it still showing the problem? I don't see anything wrong right now. I had some issues with (I suspect) bad bytecode and versioning, but that should be fixed now.
As for the EvBot chart, that is because in meleerumble nobody gets higher than ~75%, so the top 25% is empty. Although I guess I could normalise to the top score, I'd rather have the charts consistent as better bots are released.
Aha, I see now why melee charts were somewhat off.
But I insist that I do not see red points for X>95% for CunobelinDC. Look at the 5 rightmost green points: I cannot locate red (APS) or blue points at the same X values. It might be an aliasing problem, or maybe the points are just on top of each other.
Green is survival, and so the X value is the average survival score of the enemy bot. The red and blue use enemy APS as the X value, not survival, and since survival scores are higher the green dots go further to the right.
I've actually thought about changing the X axis to just be enemy APS to make it easier to interpret. Or ordering the X-axis by rank instead of using APS values.
I've changed it so they all use APS on the X axis, so it should be clearer now.
Does anyone have some advice for starting up a custom and/or private LiteRumble? I've got a new batch of programming students that I'm leading through Robocode and I'd love to run a custom bracket with just my kids in it as I've done in years past.
Sure, it's easy enough.
- Create your own app on Google AppEngine
- Download and extract the code from bitbucket
- Change the app name in app.yaml to the name of the app you created
- Download and install the Google AppEngine python SDK
- Run the following in the code directory:
appcfg.py update . && appcfg.py update batchratings.yaml
- This should give you an empty LiteRumble instance running on your app
Once you have a copy of LiteRumble running, all you need to do is modify the rumble client in roborumble.txt
to point to your new server for uploads. You also need a new participants list, which you can host on appengine too if you don't mind continually re-deploying, or you can make a wiki page somewhere. The client just parses everything between the two <pre> tags.
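For example, the upload-related lines in roborumble.txt end up looking something like this (illustrative only — the exact key names depend on the Robocode version, and both URLs are placeholders for your own participants page and AppEngine instance):

```
USER=yourname
PARTICIPANTSURL=http://yourwiki.example.com/MyRumble/Participants
RESULTSURL=http://your-app.appspot.com/UploadedResults
```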
Have fun!
Excellent. I can just host participants on a Dropbox text file. Thanks for the info!
By the way, a favorite thing I do when introducing my kids to Robocode is to have a pair of them (driver and gunner) pilot sample.Interactive at a moderate simulation speed against some sample bots until they get used to it. Then they face DrussGT. Thought you'd want to know that you've caused some laughter and groans of frustration from some prospective high school coders!
Brilliant. I've always found the sample.Interactive very difficult to control, I don't think I'd stand a chance against DrussGT =) I bet if I set the bullet colour to something more similar to the background it would make it even harder for interactive users >:-D
That's always the kicker: they have a very, very hard time adapting to a top-of-the-line bot like DrussGT or Diamond. I've had students say it's like the bot is reading their mind. Then I drop the bomb that the bot can't see bullets, while the students can. It's a great and impactful "Math is POWERFUL" moment!
Of course, set the sim speed low enough and get a patient, non-wasteful gunner, and they will trash DrussGT, because they can dance juuust aside of each bullet. But as long as I set the sim speed to keep them on their toes, it's a rough but educational ride. Fun for spectators too!
I have some ideas about dealing with interactive users - closer range, not letting energy levels get below the enemy's, varying colours of dark blue and grey bullets - perhaps that should be something I work on next. I've neglected Robocode and have been working on more pure ML/AI problems instead, but this is something more on the behavioural side, which AFAIK hasn't been done yet.
The sample bot Interactive is hard to control. For 1v1, all you would really have to change in response to what you see is orbit direction, distancing, current aiming GF, and bulletpower/when to fire. Everything else could be automatic 99+% of the time.
Would anyone be interested in a SuperInteractive wiki collaboration? Perhaps a challenge for driving it against DrussGT?
I was thinking of a fairly simple "SuperInteractive" which does regular wave-surfing, but also allows you to click on enemy bullets, which it will then dodge. Targeting, I feel, would be stronger without any human intervention.
It looks like there are a lot of megabots in the minirumble right now. Has anyone else noticed this, or is it just a problem on my end?
It seems like a bug in code size detection. Since I am the one who contributed most of the battles, the bug must be on my end. I have no idea what caused it, but this phenomenon was noticed before, once the rumble switched to accepting Robocode versions 1.9.[0,1,2].
I run a stock client with no modifications, but I don't know much about code size detection by the rumble.
I think it is a bug with the rumble client in 1.9.x. Unfortunately I am really busy right now, but if anybody wants to submit a patch to Fnl on SourceForge I'll be happy to include the new version in allowed clients.
I just looked at the roborumble page and there is something strange. How come jk.mega.DrussGT 3.1.3 has more than 100% PWIN?
I looked at it now and it has exactly 100% PWIN. But what is really interesting about DrussGT's score is that it has some extremely high scores against some very good bots, for example 88% against Phoenix and even 99.82% APS against Hydra!? I know that Druss is very good, but >99% against the #12 bot Hydra is still extremely high in my opinion.
I've been getting Server Error whenever trying to see details or do a comparison with game=roborumble. It works fine for other game types.
I think roborumble just does it more often. I notice a correlation with the stats upload time — at least, right around when my client uploads data it gives this error. Maybe it's the CPU cycles taken for stats recalculation.
I've just fixed a bug where removing a bot would corrupt the rankings list (it was still storing it in an old format - gah!) until another battle was uploaded and processed, which saved it in the new format. This should fix the problem.
One more thing I noticed: once I retire a bot, I see both the new and the old one in the ratings for quite a while. Is it related to the fixed bug?
That's a longstanding-ish normal thing. IIRC, to keep clients from fighting over removing/readding bots, I think the server doesn't remove bots until a certain amount of time since the last upload for that bot.
LiteRumble doesn't do that, the moment a client requests a bot to be removed it removes it. However, it keeps all the pairing data against it for 365 days in case it is re-added so that battles don't have to be re-run.
I'm guessing the delay was due to a client taking a while to re-download the participants page.
I actually don't think that's what the issue was, I've fixed another bug in the priority battles generation which will give more weight to bots that are missing more pairings. I've also got a lot more debugging so I can see what is happening if this pops up again.
It would be nice to have some more links on the LiteRumble landing page to general info about Robocode and RoboRumble. Even just robocode.sourceforge.net and RoboRumble wiki page would help a lot imo. A few times I've found myself wanting to mention the RoboRumble to an outsider (like just now) and I sometimes have to provide multiple links, or if I'm just providing one, I use robowiki.net/?RoboRumble. I'd rather provide literumble.appspot.com, but it basically assumes familiarity with Robocode / RoboRumble.
Good idea. I'll see if I can add something over the next few days, perhaps a link to the RoboWiki RoboRumble page and to the robocode.sf.net project homepage.
A while ago I made a change with the priority battles to do a global search for bots that didn't have full pairings, instead of a descent towards lowest by following the lowest bot of the current processed pairing. It really helped with making sure that all pairings filled out ASAP. I've now added something similar to battle count, so only bots with a battle count within 10% of the lowest will get priority battles once pairings are full. It is already making a difference in directing priority battles to more recently added bots.
Next to add is a "lowest APS against enemy" column to rating details.
Good to hear, thanks Skilgannon. I actually thought it had already been doing that for some time.
It did have priority battles based on battle count, but it was a 'gradient descent' method: it gave a priority battle to the bot in the uploaded pairing with fewer battles, and eventually this would 'descend' to the bot with the least battles. The new method goes directly to the bots with the lowest battle counts. As before, if the currently uploaded bot is one of the 'priority bots' it intelligently selects which pairing to give as priority, weighted towards pairings with lower battle counts; the new behaviour is that if the currently uploaded bot isn't one of the priority bots, instead of giving a priority battle to whichever bot in the pairing has fewer battles, it gives a random pairing from one of the 'priority bots'. A sketch of this selection follows below.
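In rough pseudo-Python (a sketch of the behaviour described above, with hypothetical data structures — not the deployed AppEngine code):

```python
import random

def priority_battle(uploaded_bot, priority_bots, candidates, battle_counts):
    """priority_bots: bots within ~10% of the lowest battle count.
    candidates[bot]: opponents that bot could still usefully fight."""
    if uploaded_bot in priority_bots:
        # Weight the uploaded bot's own candidate pairings towards
        # opponents with lower battle counts.
        opponents = candidates[uploaded_bot]
        weights = [1.0 / (1 + battle_counts[o]) for o in opponents]
        return uploaded_bot, random.choices(opponents, weights=weights)[0]
    # Don't spend the priority battle on the uploaded pairing at all:
    # hand a random pairing to one of the priority bots instead.
    bot = random.choice(sorted(priority_bots))
    return bot, random.choice(candidates[bot])
```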
I've noticed that there are currently 4 bots in the roborumble which seem sort of stuck at around 950 pairings. I'm running my client, and it appears to just be running random pairings; it's not running pairings for any of those 4 bots. I do have a number of bots that my client won't run due to the "major.minor" version issue, but it's only about 30 bots, and there are well over 100 pairings missing from each of the 4 bots without full pairings, so that doesn't quite explain it. Might be worth looking into.
I'm not quite sure what was happening, I think it might have been due to changing code without changing version number. But I've re-deployed the code with a new version number and it seems to be fixed now, those bots are getting priority battles again.
Do you have a list of what 4 bots those were? I'm wondering because robocode-archive.strangeautomata.com doesn't automatically update when the same version number already exists.
eem.EvBot v4.4.5, xander.cat.XanderCat 12.7, zezinho.QuerMePegarKKKK 1.0, EH.nano.NightBird M. EvBot was the most recent release. My client was running quite a few battles for EvBot initially (when it only had about 400 pairings), but when it got up to the 900s, suddenly it was just running random pairings of all bots despite the 4 being short roughly 150 pairings each. It appears as though all 4 of those bots are now getting priority again, as they have each picked up at least 50 pairings since this morning.
Appengine code, not bot code :-)
I'd like to use the preconfigured client explained here (http://robowiki.net/wiki/LiteRumble), but it gives me a 404 error when trying to download it (https://dl.dropboxusercontent.com/u/4066735/literumble-template.zip). For sure, I could use the BitBucket page (https://bitbucket.org/jkflying/literumble/src), but I would very much appreciate being able to use this comfortable feature.
Thank you for telling me, the link is fixed.
Thank you, now it is working fine.
Robocode is a game for teaching people how to program as well as a great way for experienced programmers to test their knowledge and skills. The wiki is pretty nice and user friendly. But both the old and the new rumble pages are entirely unfriendly. It would be good to:
- On the landing page, describe what the bot classes are, or at least link to the wiki explaining the bot classes & types of rumble.
- On the rankings explain what the columns are at the top of the page:
- WTF are APS, PWIN, ANPP, Vote, Survival, etc.? It's not exactly what I would call noob-friendly.
Any other suggestions for improving the friendliness of the rumble pages? In the same manner as bot authors can set up their flag, how about allowing them to also set up a link to their bot's robowiki page? Then when you click on bot details in the rankings, the bot's page has a "Bot Details on Wiki" link. Might be neat.
I know it's more work for you chaps to implement; this is a friendly suggestion list. I think you are doing a great job of it at the moment! :)
Explaining the different scoring systems is something I've been meaning to do for a while, so absolutely. I was actually thinking of doing it as mouse-over text on the rankings page, although maybe a separate page would be better? As for explaining bot classes, I'd rather keep the server entirely free of any sort of class-specific data, and leave everything up to the client configuration.
As for the back-to-wiki links in the bot details, how about a link that searches for the bot name on the wiki? That would minimise the amount of admin, and would just involve adding a bit of HTML to the BotDetails page.
I think it would be nice to have one or more new columns on the Bot details table which would tell whether a certain bot "Voted" for the bot currently being viewed, and whether the bot currently being viewed "Voted" for a certain bot. It would also be nice to see whom else a certain bot voted for, if the vote was split multiple ways.
Thanks
You can infer that from the NPP score. If you got 100 NPP against them, then they voted for you; if they got 100 NPP against you, then you voted for them. NPP isn't symmetric, so their 100 NPP doesn't necessarily align with your lowest NPP, but you can just check your lowest APS score, and it will tell you who you voted for (multiple bots if there were ties).
Hi mates,
I'm trying to bring my Robocode environment back into working order and have some questions and maybe some issues.
Are there any changes in the rumble client for 1.8.1.0 I should know about? Besides the increased upload speed, I mean? Because melee battles are not canceled if one or more bots couldn't be loaded. Shame on me - I'm still on Java 1.6 and a couple of bots are written for 1.7. I'm trying to exclude all 1.7 bots right now, but I'm not sure if I got them all. In 1vs1 I get a 'could not load' error for 'apv.TheBrainPi_0.5fix.jar', which looks perfectly fine, link- and package-wise, so far :(.
Something different: I have to admit that I do not understand how to use the new scoring columns. I know what they do (maybe not fully), but I don't get how to interpret the values. For example NPP: if I have, let's say, 88% against a particular bot - where do I have to look, and what should I do to increase this number (or better, find what is wrong and then find a way to make it better)? Is there a way to spot bugs/uncertainties from the scoring columns? I remember looking at the battle table against each robot quite often to see if I lose one or more rounds here and there, which led me to a couple of bugs/mistakes I had made.
I'm not sure what's up with the 1.8.1.0 issues, what version of the JVM are you using? OpenJDK or Oracle?
As for the scoring columns, I added a link on the Literumble homepage which explains how the scores are calculated. NPP is your APS score normalised against the min and max score against this bot, so if you got 88% then you are close to the maximum score achieved; 0 would mean you got the lowest score and 100 the highest score. So somebody gets 0 NPP and somebody gets 100 NPP against every bot.
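In other words, a one-line restatement of that normalisation (just the formula as described above, not the server source):

```python
def npp(your_aps, min_aps, max_aps):
    """0 for the lowest APS anyone scored against this opponent,
    100 for the highest, linear in between."""
    return 100.0 * (your_aps - min_aps) / (max_aps - min_aps)
```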
When looking for bugs I normally look at the lowest KNNPBI score, or compare to previous versions to see the biggest drop in score. However, in melee this is harder because the score in each battle obviously also depends on who else was in the battle at the time. I'm thinking of adding a 'variance' column, which should help point to bots which cause bugs to happen only some of the time, but I'm very busy right now so if I do that it will only be in late August (I'm in the final writing stage for my MSc!)
Yes, I saw the score description page, and it explains very well where the numbers come from - but unfortunately (for me) not enough about how to interpret or use them. I think I have to get used to it by observing a little more how the numbers change. My guess for the NPP was: if I get 88%, there are bots who perform better than I do (score-wise) against this bot. But how can I find the bots that are better than me? KNNPBI is great for spotting the bots that I have trouble with in general, but I don't see how I could use the numbers to see that I lose, let's say, 1-2 rounds (rounds, not battles) every now and then, which normally tells me that I have a bug for very specific situations (trapped in the corner at round start, for example). Is the K value calculated globally, or anew for every class? Yes, you are right: in melee you get a wider spread in score for each bot because you can't tell what other bots were on the field, and it would be nice to see the range or something.
But by all means take your time and concentrate on your MSc - good luck with it! I donated a little bit to help you with the maintenance costs and hope you can use it.
Take care
Edit: argh - forget about the NPP question, I found the answer right after I wrote this. I just have to look at the other bot's score table and see who is better than me. Well, I feel a little stupid right now :)
Thank you for the donation =) Each rumble is completely independent; only the client knows that minirumble is related to nanorumble, microrumble and roborumble more than to, say, gigarumble. So the K value depends on the rumble that the scores are from: for example, if you are looking at Yatagan's scores in the nanorumble then K will be calculated from the nanorumble size, and if you look at Yatagan's scores in the roborumble K will be larger.
I'm not sure how to do exactly what you are asking without storing every single pairing, which gets far too complicated (and expensive) with the design I have now. Checking the Survival will tell you how many battles you win/lose, but again that is only an average. I am thinking of keeping a Variance score for both APS and Survival (this will be easy to calculate incrementally just like I do the APS for a pairing), and from the variance I can also calculate Standard Deviation and Confidence Interval as I render the page.
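Keeping such a variance incrementally is straightforward with Welford's online algorithm - a sketch of what a per-pairing 'variance column' could maintain (hypothetical, not the actual LiteRumble schema):

```python
import math

class RunningStats:
    """Welford's online algorithm: update mean and variance one battle
    at a time, without storing the individual scores."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0   # running sum of squared deviations

    def add(self, score):
        self.n += 1
        delta = score - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (score - self.mean)

    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

    def stddev(self):
        return math.sqrt(self.variance())
```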
About the NPP question, yes you have it, it is a problem that requires the entire score matrix to answer, so you need to check the scores on the other bot's page =)
Thanks for the good wishes for the MSc. Right now it just feels like lots of hard, boring work!
Are you sure it's a change of behavior about the Java 7 bots? I would expect the battle to still run and those bots to get zero scores. I've removed some bots from the participants list for requiring Java 7 and asked people to update with Java 6 compatibility - we shouldn't be requiring Java 7 yet. A more prominent notice about that somewhere might be nice. (Excluding them is also fine of course.)
I remember TheBrainPi could get into a broken state with his save data. I think that's part of what we "fixed" but maybe it's still possible? Could you delete him from the .data dir, or was this on a fresh install already?
I'm not sure if it is a change, but I remember I had this before and the missing bots were just replaced with another bot from the list. Hmm, not sure if all bots get zero score. I just saw that the battle starts, takes its time to finish, and the upload starts like normal (I will check this). And you get a message on battle start: 'bot ... could not be loaded'. There are quite a lot of bots written with Java 7, so my guess was that it is standard now.
Yes, it's a new install from scratch with all bots loaded (empty robots directory). I had to fix some broken links in the participants files. So - yep, fresh install.
I'm on Oracle 1.6 for all my systems.