Talk:LiteRumble
Contents
Thread title | Replies | Last modified |
---|---|---|
Starting your own LiteRumble | 10 | 05:31, 19 November 2014 |
Minirumble | 2 | 20:43, 4 August 2014 |
strange stats for jk.mega.DrussGT 3.1.3 | 3 | 03:36, 20 April 2014 |
Roborumble Bot Details Gives Server Error | 7 | 15:19, 21 November 2013 |
more informative landing page | 1 | 22:25, 19 November 2013 |
Better Priority Battles | 8 | 18:29, 18 November 2013 |
LiteRumble preconfigured client - Error (404) | 2 | 22:15, 24 October 2013 |
Clarity & Other suggestions | 4 | 15:49, 29 June 2013 |
Gigarumble with missing pairings | 0 | 22:31, 26 June 2013 |
Rumble questions and issues ... | 5 | 21:56, 16 June 2013 |
Score Distribution | 6 | 14:37, 2 June 2013 |
Queue full | 5 | 21:36, 31 May 2013 |
Bad uploads | 3 | 16:01, 31 May 2013 |
KNN PBI | 9 | 12:01, 30 May 2013 |
Backlog | 0 | 11:33, 20 May 2013 |
Put_Your_Name_Here | 1 | 13:44, 21 April 2013 |
Rerun of Pairings | 19 | 15:03, 6 April 2013 |
Bot pairing vs itself | 5 | 11:08, 6 April 2013 |
LiteRumble Statistics | 6 | 21:48, 5 April 2013 |
Awesome job ... | 3 | 21:36, 1 April 2013 |
Does anyone have some advice for starting up a custom and/or private LiteRumble? I've got a new batch of programming students that I'm leading through Robocode and I'd love to run a custom bracket with just my kids in it as I've done in years past.
Sure, it's easy enough.
- Create your own app on Google AppEngine
- Download and extract the code from bitbucket
- Change the app name in app.yaml to the name of the app you created
- Download and install the Google AppEngine python SDK
- Run the following in the code directory:
appcfg.py update . && appcfg.py update batchratings.yaml
- This should give you an empty LiteRumble instance running on your app
Once you have a copy of LiteRumble running, all you need to do is modify the rumble client in roborumble.txt to point to your new server for uploads. You also need a new participants list, which you can host on appengine too if you don't mind continually re-deploying, or you can make a wiki page somewhere. The client just parses everything between the two <pre> tags.
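For example, the relevant entries might end up looking something like this (just an illustration - the property key names and upload path here are guesses from memory, so check them against the keys already in your roborumble.txt and on the LiteRumble page rather than copying verbatim):

    # hypothetical values - substitute your own app name and participants page
    PARTICIPANTSURL=http://robowiki.net/wiki/YourCustomRumble/Participants
    RESULTSURL=http://your-app-name.appspot.com/UploadedResults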
Have fun!
Excellent. I can just host participants on a Dropbox text file. Thanks for the info!
By the way, a favorite thing I do when introducing my kids to Robocode is to have a pair of them (driver and gunner) pilot sample.Interactive at a moderate simulation speed against some sample bots until they get used to it. Then they face DrussGT. Thought you'd want to know that you've caused some laughter and groans of frustration from some prospective high school coders!
Brilliant. I've always found sample.Interactive very difficult to control - I don't think I'd stand a chance against DrussGT =) I bet if I set the bullet colour to something closer to the background it would make it even harder for interactive users >:-D
That's always the kicker: they have a very, very hard time adapting to a top-of-the-line bot like DrussGT or Diamond. I've had students say it's like the bot is reading their mind. Then I drop the bomb that the bot can't see bullets, while the students can. It's a great and impactful "Math is POWERFUL" moment!
Of course, set the sim speed low enough and give them a patient, non-wasteful gunner, and they will trash DrussGT because they can dance juuust aside of each bullet. But as long as I set the sim speed high enough to keep them on their toes, it's a rough but educational ride. Fun for spectators too!
I have some ideas about dealing with interactive users - closer range, not letting my energy level drop below the enemy's, varying colours of dark blue and grey bullets - perhaps that should be something I work on next. I've neglected Robocode and have been working on more pure ML/AI problems instead, but this is something more on the behavioural side, which AFAIK hasn't been done yet.
The sample bot Interactive is hard to control. For 1v1, all you would really have to change in response to what you see is orbit direction, distancing, current aiming GF, and bullet power/when to fire. Everything else could be automatic 99+% of the time.
Would anyone be interested in a SuperInteractive wiki collaboration? Perhaps a challenge for driving it against DrussGT?
I was thinking of a fairly simple "SuperInteractive" which does regular wave-surfing, but also allows you to click on enemy bullets, which it will then dodge. Targeting, I feel, would be stronger without any human intervention.
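Something like this is what I have in mind - just a sketch, not working code. surfAutomatically(), aimAndFireAutomatically() and addSurfableWave() are hypothetical stand-ins for a real wave-surfing movement and gun:

    import java.awt.event.MouseEvent;
    import robocode.AdvancedRobot;

    public class SuperInteractiveSketch extends AdvancedRobot {
        public void run() {
            while (true) {
                surfAutomatically();        // regular wave surfing, no human input needed
                aimAndFireAutomatically();  // targeting stays fully automatic, as suggested above
                execute();
            }
        }

        // Interactive robots receive mouse events in battlefield coordinates (as in
        // sample.Interactive). A click marks a bullet the human has spotted, and the
        // movement then dodges it like any other wave.
        public void onMouseClicked(MouseEvent e) {
            addSurfableWave(e.getX(), e.getY());
        }

        private void surfAutomatically() { /* omitted */ }
        private void aimAndFireAutomatically() { /* omitted */ }
        private void addSurfableWave(double x, double y) { /* omitted */ }
    }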
It looks like there are a lot of megabots in the minirumble right now. Has anyone else noticed this, or is it just a problem on my end?
It seems like a bug in code size detection. Since I am the one who contributed most of the battles, the bug must be on my end. I have no idea what caused it, but this phenomenon was noticed once before, when the rumble switched to accepting Robocode versions 1.9.[0,1,2].
I run a stock client with no modifications, but I don't know much about how the rumble detects code size.
I think it is a bug with the rumble client in 1.9.x. Unfortunately I am really busy right now, but if anybody wants to submit a patch to Fnl on SourceForge I'll be happy to include the new version in allowed clients.
I just looked at the roborumble page and there is something strange. How come jk.mega.DrussGT 3.1.3 has more than 100% PWIN?
I looked at it now and it has exactly 100% PWIN. But what is really interesting about DrussGT's score is that it has some extremely high scores against some very good bots, for example 88% against Phoenix and even 99.82% APS against Hydra!? I know that DrussGT is very good, but >99% against the #12 bot Hydra still seems extremely high to me.
I've been getting Server Error whenever trying to see details or do a comparison with game=roborumble. It works fine for other game types.
I think roborumble just does it more often. I notice a correlation with stats upload time - at least, right around when my client uploads data it gives this error. Maybe it's the CPU cycles taken for stats recalculation.
I've just fixed a bug where removing a bot would corrupt the rankings list (it was still storing it in an old format - gah!) until another battle was uploaded and processed, which saved it in the new format. This should fix the problem.
One more thing I noticed: once I retire a bot, I see both the new and the old version in the ratings for quite a while. Is it related to the fixed bug?
That's a longstanding-ish normal thing. IIRC, to keep clients from fighting over removing/re-adding bots, the server doesn't remove bots until a certain amount of time has passed since the last upload for that bot.
LiteRumble doesn't do that, the moment a client requests a bot to be removed it removes it. However, it keeps all the pairing data against it for 365 days in case it is re-added so that battles don't have to be re-run.
I'm guessing the delay was due to a client taking a while to re-download the participants page.
I actually don't think that's what the issue was, I've fixed another bug in the priority battles generation which will give more weight to bots that are missing more pairings. I've also got a lot more debugging so I can see what is happening if this pops up again.
It would be nice to have some more links on the LiteRumble landing page to general info about Robocode and RoboRumble. Even just robocode.sourceforge.net and RoboRumble wiki page would help a lot imo. A few times I've found myself wanting to mention the RoboRumble to an outsider (like just now) and I sometimes have to provide multiple links, or if I'm just providing one, I use robowiki.net/?RoboRumble. I'd rather provide literumble.appspot.com, but it basically assumes familiarity with Robocode / RoboRumble.
Good idea. I'll see if I can add something over the next few days, perhaps a link to the RoboWiki RoboRumble page and to the robocode.sf.net project homepage.
A while ago I made a change to the priority battles to do a global search for bots that don't have full pairings, instead of descending towards the lowest by following the bot with fewer battles in the currently processed pairing. It really helped with making sure that all pairings fill out ASAP. I've now added something similar for battle count, so only bots with a battle count within 10% of the lowest will get priority battles once pairings are full. It is already making a difference in directing priority battles to more recently added bots.
Next to add is a "lowest APS against enemy" column to rating details.
Good to hear, thanks Skilgannon. I actually thought it had already been doing that for some time.
It did have priority battles based on battle count, but it was a 'gradient descent' method: it gave a priority battle to the bot in the uploaded pairing with fewer battles, and would eventually 'descend' to the bot with the fewest battles. The new method goes directly to the bots with the lowest battle counts. If the currently uploaded bot is one of the 'priority bots' it still intelligently selects which pairing to give as priority, weighted towards pairings with lower battle counts; but the new behaviour is that if the currently uploaded bot isn't one of the priority bots, then instead of giving a priority battle to whichever bot in the uploaded pairing has fewer battles, it gives a random pairing to one of the 'priority bots'.
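In pseudo-Java (the actual server is Python on AppEngine, and all the names here are made up for illustration), the new selection is roughly:

    import java.util.*;

    class PriorityBattleSketch {
        // "Priority bots": anything missing pairings, or (once pairings are full) anything
        // whose battle count is within 10% of the lowest.
        static List<String> priorityBots(Map<String, Integer> battleCounts, Set<String> missingPairings) {
            int lowest = Collections.min(battleCounts.values());
            Set<String> priority = new LinkedHashSet<String>(missingPairings);
            for (Map.Entry<String, Integer> e : battleCounts.entrySet()) {
                if (e.getValue() <= lowest * 1.1) {
                    priority.add(e.getKey());
                }
            }
            return new ArrayList<String>(priority);
        }

        // If the uploaded bot is itself a priority bot, weight its next pairing towards
        // opponents with fewer battles; otherwise hand out a random pairing for one of
        // the priority bots instead of the uploaded bot.
        static String[] nextBattle(String uploadedBot, List<String> priority,
                                   Map<String, Integer> battleCounts, Random rnd) {
            if (priority.contains(uploadedBot)) {
                String best = null;
                double bestWeight = -1;
                for (String opponent : battleCounts.keySet()) {
                    if (opponent.equals(uploadedBot)) continue;
                    double weight = rnd.nextDouble() / (1.0 + battleCounts.get(opponent));
                    if (weight > bestWeight) { bestWeight = weight; best = opponent; }
                }
                return new String[] { uploadedBot, best };
            }
            String bot = priority.get(rnd.nextInt(priority.size()));
            List<String> others = new ArrayList<String>(battleCounts.keySet());
            others.remove(bot);
            return new String[] { bot, others.get(rnd.nextInt(others.size())) };
        }
    }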
I've noticed that there are currently 4 bots in the roborumble which seem sort of stuck at around 950 pairings. I'm running my client, and it appears to just be running random pairings; it's not running pairings for any of those 4 bots. I do have a number of bots that my client won't run due to the "major.minor" version issue, but it's only about 30 bots, and there are well over 100 pairings missing from each of the 4 bots without full pairings, so that doesn't quite explain it. Might be worth looking into.
I'm not quite sure what was happening, I think it might have been due to changing code without changing version number. But I've re-deployed the code with a new version number and it seems to be fixed now, those bots are getting priority battles again.
Do you have a list of what 4 bots those were? I'm wondering because robocode-archive.strangeautomata.com doesn't automatically update when the same version number already exists.
eem.EvBot v4.4.5, xander.cat.XanderCat 12.7, zezinho.QuerMePegarKKKK 1.0, EH.nano.NightBird M. EvBot was the most recent release. My client was running quite a few battles for EvBot initially (when it only had about 400 pairings), but when it got up to the 900s, suddenly it was just running random pairings of all bots despite the 4 being short roughly 150 pairings each. It appears as though all 4 of those bots are now getting priority again, as they have each picked up at least 50 pairings since this morning.
Appengine code, not bot code :-)
I'd like to use the preconfigured client explained here (http://robowiki.net/wiki/LiteRumble), but I get a 404 error when trying to download it (https://dl.dropboxusercontent.com/u/4066735/literumble-template.zip). Of course I could use the BitBucket page (https://bitbucket.org/jkflying/literumble/src), but I would very much appreciate being able to use this convenient feature.
Thank you for telling me, the link is fixed.
Thank you, now it is working fine.
Robocode is a game for teaching people how to program as well as a great way for experienced programmers to test their knowledge and skills. The wiki is pretty nice and user friendly. But both the old and the new rumble pages are entirely unfriendly. It would be good to:
- On the landing page, describe what the bot classes are, or at least link to the wiki explaining the bot classes & types of rumble.
- On the rankings page, explain what the columns at the top of the page are:
- WTF are APS, PWIN, ANPP, Vote, Survival, etc.? It's not exactly what I would call noob-friendly.
Any other suggestions for improving the friendliness of the rumble pages? In the same manner as bot authors can set up their flag, how about allowing them to also set up a link to their bot's robowiki page? Then when you click on bot details in the rankings, the bot's page has a "Bot Details on Wiki" link. Might be neat.
I know it's more work for you chaps to implement; this is a friendly suggestion list. I think you are doing a great job of it at the moment! :)
Explaining the different scoring systems is something I've been meaning to do for a while, so absolutely. I was actually thinking of doing it as mouse-over text on the rankings page, although maybe a separate page would be better? As for explaining bot classes, I'd rather keep the server entirely free of any sort of class-specific data, and leave everything up to the client configuration.
As for the back-to-wiki links in the bot details, how about a link that searches for the bot name on the wiki? That would minimise the amount of admin, and would just involve adding a bit of HTML to the BotDetails page.
I think it would be nice to have one or more new columns on the Bot details table which would tell whether a certain bot "Voted" for the bot currently being viewed, and whether the bot currently being viewed "Voted" for a certain bot. It would also be nice to see whom else a certain bot voted for, if the vote was split multiple ways.
Thanks
You can infer that from the NPP score. If you got 100 NPP against them, then they voted for you; if they got 100 NPP against you, then you voted for them. NPP isn't symmetric, so their 100 NPP doesn't necessarily align with your lowest NPP, but you can just check your lowest APS score and it will tell you who you voted for (multiple bots if there were ties).
Hi mates
I'm trying to bring my Robocode environment back into working order and have some questions and maybe some issues.
Are there any changes in the rumble client for 1.8.1.0 that I should know about, besides the increased upload speed? Melee battles are not cancelled if one or more bots couldn't be loaded. Shame on me - I'm still on Java 1.6 and a couple of bots are written for 1.7. I'm trying to exclude all 1.7 bots right now, but I'm not sure I've caught them all. In 1vs1 I get a 'could not load' error for apv.TheBrainPi_0.5fix.jar, which looks perfectly fine link- and package-wise so far :(.
Something different: I have to admit that I do not understand how to use the new scoring columns. I know what they do (maybe not fully), but I don't get how to interpret the values. For example NPP: if I have, let's say, 88% against a particular bot - where do I have to look, and what should I do to increase this number (or better, what is wrong, so I can find a way to make it better)? Is there a way to spot bugs/uncertainties from the scoring columns? I remember looking at the battle table against each robot quite often to see if I lose one or more rounds here and there, which led me to a couple of bugs/mistakes I had made.
I'm not sure what's up with the 1.8.1.0 issues, what version of the JVM are you using? OpenJDK or Oracle?
As for the scoring columns, I added a link on the Literumble homepage which explains how the scores are calculated. NPP is your APS score normalised against the min and max score achieved against this bot, so if you got 88% then you are close to the maximum score achieved; 0 would mean you got the lowest score and 100 the highest. So somebody gets 0 NPP and somebody gets 100 NPP against every bot.
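In other words (my numbers, just for illustration):

    // NPP = your APS in the pairing, rescaled between the worst and best APS anyone
    // has achieved against that same opponent.
    static double npp(double myAps, double minAps, double maxAps) {
        return 100.0 * (myAps - minAps) / (maxAps - minAps);
    }
    // e.g. if scores against an opponent range from 20 to 90 and you scored 88,
    // npp(88, 20, 90) is about 97 - very close to the best anyone has managed.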
When looking for bugs I normally look at the lowest KNNPBI score, or compare to previous versions to see the biggest drop in score. However, in melee this is harder because the score in each battle obviously also depends on who else was in the battle at the time. I'm thinking of adding a 'variance' column, which should help point to bots which cause bugs to happen only some of the time, but I'm very busy right now so if I do that it will only be in late August (I'm in the final writing stage for my MSc!)
Yes, I saw the score description page and it explains very well where the numbers come from - but unfortunately (for me) not enough about how to interpret or use them. I think I have to get used to it with a little more observation of how the numbers change. My guess for NPP was that if I get 88%, there are bots who perform better than me (score-wise) against this bot. But how can I find the bots that are better than me? KNNPBI is great for spotting the bots that I have trouble with in general, but I don't see how I could use the numbers to see that I lose, let's say, 1-2 rounds (rounds, not battles) every now and then, which normally tells me that I have a bug for very specific situations (trapped in the corner at round start, for example). Is the K value calculated globally or per class? Yes, you are right, in melee you get a wider spread in score for each bot because you can't tell what other bots were on the field, and it would be nice to see the range or something.
But by all means take your time and concentrate on your MSc - good luck with it! I donated a little bit to help you with the maintenance costs and hope you can use it.
Take care
Edit: argh - forget about the NPP question I found it right after I wrote this. I just have to look at the bots score table and see who is better than me. Well I feel a little stupid right now :)
Thank you for the donation =) Each rumble is completely independent; only the client knows that minirumble is related to nanorumble, microrumble and roborumble any more than, say, gigarumble. So the K value is dependent on the rumble that the scores are from: for example, if you are looking at Yatagan's scores in the nanorumble then K will be calculated from the nanorumble size, and if you look at the Yatagan scores in the roborumble K will be larger.
I'm not sure how to do exactly what you are asking without storing every single pairing, which gets far too complicated (and expensive) with the design I have now. Checking the Survival will tell you how many battles you win/lose, but again that is only an average. I am thinking of keeping a Variance score for both APS and Survival (this will be easy to calculate incrementally just like I do the APS for a pairing), and from the variance I can also calculate Standard Deviation and Confidence Interval as I render the page.
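For the incremental part, I mean the usual running mean/variance update (sketched here in Java rather than the actual Python, with made-up names):

    class PairingStats {
        long battles = 0;
        double mean = 0; // the pairing APS
        double m2 = 0;   // running sum of squared deviations

        void addBattle(double percentScore) {
            battles++;
            double delta = percentScore - mean;
            mean += delta / battles;
            m2 += delta * (percentScore - mean);
        }

        double variance() { return battles > 1 ? m2 / (battles - 1) : 0; }
        double stdDev()   { return Math.sqrt(variance()); }
    }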
About the NPP question, yes you have it, it is a problem that requires the entire score matrix to answer, so you need to check the scores on the other bot's page =)
Thanks for the good wishes for the MSc. Right now it just feels like lots of hard, boring work!
Are you sure it's a change of behavior about the Java 7 bots? I would expect the battle to still run and those bots to get zero scores. I've removed some bots from the participants list for requiring Java 7 and asked people to update with Java 6 compatibility - we shouldn't be requiring Java 7 yet. A more prominent notice about that somewhere might be nice. (Excluding them is also fine of course.)
I remember TheBrainPi could get into a broken state with his save data. I think that's part of what we "fixed" but maybe it's still possible? Could you delete him from the .data dir, or was this on a fresh install already?
I'm not sure if it is a change, but I remember I had this before and the missing bots were just replaced with another bot from the list. Hmm, not sure if all bots get zero score. I just saw that the battle starts, takes its time to finish, and the upload starts like normal (I will check this). And you get a message on battle start: 'bot ... could not be loaded'. There are quite a lot of bots written for Java 7, so my guess was that it is standard now.
Yes, it's a new install from scratch with all bots loaded (empty robots directory). I had to fix some broken links in the participants files. So - yep, fresh install.
I'm on Oracle 1.6 for all my systems.
Can someone describe how to read the fancy new Score Distribution graph? What is the X axis? What is the Y axis? What do the different colored dots represent?
There is a caption under the diagram describing X and Y. Mouse over the image to see what the colors mean.
Basically, X is opponent strength, and Y is how good your robot is against that opponent.
Basically, the X axis is the score that each particular opponent got in the rumble, while the Y is the score you got against them. Right now I have red = Opponent APS vs Pairing APS, green = Opponent Survival vs Pairing Survival and blue = Opponent APS vs (KNNPBI+50). Each pixel coloured in represents at least one pairing with the score at that location. Both axes go from 0 at the origin to 100 at the top and right edges of the picture.
I'm thinking of changing green to Opponent APS vs Pairing Survival, just so that the X axis is always Opponent APS. Any thoughts?
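Roughly how the image gets built, as a sketch (illustrative Java, not the real rendering code; each pairing is passed in as {opponentAPS, pairingAPS, opponentSurvival, pairingSurvival, KNNPBI}):

    import java.awt.image.BufferedImage;

    class ScoreDistributionSketch {
        static final int RED = 0xFF0000, GREEN = 0x00FF00, BLUE = 0x0000FF;

        // One pixel per pairing; both axes run 0-100 with the origin at the bottom-left.
        static void plot(BufferedImage img, double x, double y, int channel) {
            int px = Math.min(100, Math.max(0, (int) Math.round(x)));
            int py = 100 - Math.min(100, Math.max(0, (int) Math.round(y))); // image y grows downward
            img.setRGB(px, py, img.getRGB(px, py) | channel);
        }

        static BufferedImage render(Iterable<double[]> pairings) {
            BufferedImage img = new BufferedImage(101, 101, BufferedImage.TYPE_INT_RGB);
            for (double[] p : pairings) {
                plot(img, p[0], p[1], RED);       // red: Opponent APS vs Pairing APS
                plot(img, p[2], p[3], GREEN);     // green: Opponent Survival vs Pairing Survival
                plot(img, p[0], p[4] + 50, BLUE); // blue: Opponent APS vs (KNNPBI + 50)
            }
            return img;
        }
    }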
Can I ask a question over here? If I look at my or any "bot detail page" (like http://literumble.appspot.com/BotDetails?game=roborumble&name=mae.Mae1%201.1) I see many abbreviations, for example APS, NPP and KNNPBI. Is there a page in this wiki or somewhere else where these are explained? My English is not so good that I could deduce the meaning myself. Thank you very much.
Some of them are listed at LiteRumble.
The important one is APS, as that is the primary ranking. APS is average percentage score.
For each opponent your percent score is 100% * <your score> / (<your score> + <opponent score>).
APS is this value averaged over all the battles/opponents for your bot, so it is important not just to win, but to win by the largest margin possible.
PWIN is percentage of wins.
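For instance (just an illustration of the formula above, not server code):

    // Percent score for one opponent; APS is this averaged over all your opponents.
    static double percentScore(double yourScore, double opponentScore) {
        return 100.0 * yourScore / (yourScore + opponentScore);
    }
    // e.g. beating a bot 3500 to 1500 gives percentScore(3500, 1500) = 70.0, so a clear win
    // still only contributes 70 - which is why the margin of victory matters so much for APS.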
I have added a page on the Literumble here (and a link to it from the main page) which provides better explanations of what the different scores are.
If you have any questions, or think that they need to be clarified, just ask.
Thank you very much.
When the server queue is full, LiteRumble is returning an OK, making the client discard the uploaded battle.
IMHO it would be better to return an ERROR instead, telling the client to keep the battle and retry the upload later.
I wouldn't mind doing that if the client would delay for 10 seconds or so before trying again after an error. Right now it just retries the battles that failed each iteration (along with the new ones), and this quickly leads to all clients just trying to upload at full speed the whole time, which puts too much load on the server.
If we wanted to change the client protocol so that the client delays when this happens, I wouldn't have a problem. However, another issue is that the priority battles get delayed, which means that a) the bots that need priority battles end up getting too many once the queue is run and b) new bots take a long time to enter the rumble. So, optimal client behaviour would be: if the server returns 'OK. QUEUE FULL.', the client should wait 10-30 seconds and then retry uploading the same pairing to the server.
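Something along these lines on the client side (a sketch only; upload() is a placeholder for the client's existing HTTP POST, and the exact response text is whatever the server actually sends):

    static void uploadWithRetry(String resultLine) throws InterruptedException {
        java.util.Random rnd = new java.util.Random();
        while (true) {
            String response = upload(resultLine);       // placeholder for the real upload call
            if (!response.contains("QUEUE FULL")) {
                return;                                  // accepted, or a different error handled elsewhere
            }
            Thread.sleep(10000 + rnd.nextInt(20000));    // back off 10-30 seconds, then retry the same pairing
        }
    }

    static String upload(String resultLine) { /* omitted: POST the result to the rumble server */ return "OK"; }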
PS. the new rumble structure lends itself well to bulk upload strategies. If you want to write a bulk upload protocol I would be happy to look it over and implement it on the server.
I'm writing a custom client right now (I've been slowly writing it from scratch over the last few months). Yesterday I managed to make it fully functional, although it still needs polishing (making hardcoded behaviour more configurable). I'll make it available here after it becomes more stable.
I can add bulk upload, but it will break compatibility with the current protocol. Unless both clients and servers support 2 protocols at the same time.
Features I managed to include in this custom client so far:
- Full compatibility with the current protocol. (I hope that underscore bug was the last one)
- Multiprocess/multithread support.
- Parallel downloading of JARs. (currently hardcoded at 15 simultaneous downloads)
- Processing battles in parallel while uploading results in a separate process. (currently hardcoded at 1 simultaneous upload)
- Abusing the Java 5 concurrent API to keep the code readable in the presence of parallelism.
- Upload throttling in case of errors (currently hardcoded at a 10 second delay after each error). It is possible to throttle uploads in the absence of errors too, although I wasn't planning to do that.
- Smarter handling of priority battles. One big pairing matrix handles priority battles, new competitors and competitors with low battle counts, all at the same time. And it is independent of iterations (which I eliminated).
- Communication between processes over the network, allowing clients spread across a LAN. (currently hardcoded to the "localhost" address and port 1099 only)
- Automatic copying of JARs between clients. If a single "server" process has all JARs, no client needs to download from internet.
- Logging support. No more System.out.println. You can configure how messages appear in the console (or in a file), adding for example, time and severity.
- Internally, battle parameters are all dynamic. Parameters like number of competitors, inactivity time, gun heat cooldown, codesize classes, hideRobotNames, are all concentrated in a configuration class. The idea is to put them all in configuration files and make divisions like twin duel, team melee or anything else fully supported.
Neat! Sounds like a lot of work. What's the setup like for multi-process battles? And is it the same mechanism locally vs clients across a network?
There is a "server" process and multiple "worker" processes. You start the server process by calling server.cmd. And start each worker process by calling worker.cmd. Each one runs in a separate JVM and needs its own Robocode installation. This way each process runs in a separate window and you can see what each is doing.
All communication to LiteRumble is done by the server process alone. Server and workers communicate through RMI.
Server process is currently using the same configuration file of the official client. Worker processes are currently 100% hardcoded, but server address/port and robocode home could be configured.
The server process downloads the participants list and ratings, downloads JARs (in a separate "jar" thread pool), calculates code size, removes old participants and generates a local participants list. These tasks run in a single-threaded "download" pool (except JARs). The participants list and battle counts are sent to a "battle generation" thread pool, which is single-threaded.
Worker processes connect to the server and request a battle, which is generated on-the-fly by the server in the "battle generation" thread pool. Then the worker runs the battle and sends the result back to the server. Worker processes are single-threaded (except for threads internal to Robocode).
The server receives the result, splits it into code size classes and sends them to an "upload" thread pool, which is currently single-threaded.
In the "upload" thread pool, results are uploaded to LiteRumble. Battle count and priority battles are downloaded and sent to the "battle generation" thread pool. If workers flood the "upload" thread pool with results, upload requests are kept in a queue, and are uploaded one at a time.
In the battle generator, participants list, battle count and priority battles are grouped and used to generate a smart battle whenever a worker requests. All battle generation logic is kept in a single class, in a single thread, making it easy to customize.
The result is you see battles going non-stop on workers, and uploads going almost non-stop in the server process, one at a time. Makes a huge difference in melee.
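For the curious, the server/worker contract boils down to something like this (a simplified sketch, not the actual code; Battle and BattleResult are hypothetical serializable holders):

    import java.rmi.Remote;
    import java.rmi.RemoteException;

    public interface RumbleCoordinator extends Remote {
        // A worker asks for its next battle; the server builds it on the fly in the
        // "battle generation" thread pool.
        Battle requestBattle() throws RemoteException;

        // The worker sends the finished result back; the server splits it into codesize
        // classes and hands it to the single-threaded uploader.
        void submitResult(BattleResult result) throws RemoteException;
    }

    // hypothetical payload types
    class Battle implements java.io.Serializable { String[] competitors; }
    class BattleResult implements java.io.Serializable { String[] competitors; int[] scores; }

Workers look the coordinator up in the RMI registry (currently localhost:1099, as mentioned above) and just loop on requestBattle/submitResult.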
While testing a custom client I screwed up while uploading results to LiteRumble. Now there are a few bots with wrong names on the server (underscores instead of spaces):
bvh.mini.Wodan_0.50
taqho.taqbot_1.0
eat.HumblePieLite_1.0
ntw.Sighup_1.5
jcs.Megatron_1.2
Polkwane.Intensive_1.0
Legend.Biogon_1.5
gwah.GerryBotMkII_1.5.1
cw.megas.Silhouette_1.1
el.JumpShoot_0.2
ags.Midboss_1q.fast
Polkwane.Piyane_0.7b
bjl.LoneDragon_0.5
pez.mini.VertiLeach_0.4.0
vft.Valkyrie_1.0
japs.Sjonniebot_0.9.1
pmc.SniperBot_1.0
supersample.SuperTracker_1.0
pez.micro.Aristocles_0.3.7
timmit.nano.TimDog_0.33
oog.nano.Caligula_1.15
And the API doesn't let me delete them. It translates underscores to spaces on delete requests, making these names inaccessible.
I've changed the API so that it doesn't filter like that in the delete, but does in the upload. I'm not sure if they'll be automatically removed now, or if the client can't handle the underscores.
The client couldn't do it, so I quickly scripted something to remove them from robo/mini/micro/nano. It should be fine now.
The client can't automatically distinguish between names with underscores and names with spaces when removing old participants. That's because the rating list downloaded from the rumble server uses underscores, and the list downloaded from the wiki uses spaces.
...and remove requests use underscores ...and upload requests use spaces ...and priority battles responses use underscores.
Wow, I'm really excited about this! Finally I have a better idea where to focus my benchmarks. =) [1]
What k are you using? And is DrussGT getting k/2 because there's nobody above him, or still getting k, but all below him?
Sorry I missed this...
Right now I'm using k=sqrt(opponents), and it just chops it off. So the top bot gets k/2, second bot gets (k/2)+1 etc.
I just discovered that my numpy conversion had broken the KNNPBI completely, so I've fixed that and re-run all of the rumbles. Now that it is using numpy, it should give nice symmetrical results (although KNNPBI isn't really symmetrical by design, but now they are consistent).
Also, looking at the code, k=sqrt(len(bots))/2
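So the neighbour window is effectively this (made-up names, just to spell out the chopping at the edges):

    static int[] neighbourWindow(int rank, int numBots) {
        int k = (int) (Math.sqrt(numBots) / 2);
        int lo = Math.max(0, rank - k);              // chopped at the top of the rankings...
        int hi = Math.min(numBots - 1, rank + k);    // ...and at the bottom
        return new int[] { lo, hi };                 // neighbours are ranks lo..hi, excluding the bot itself
    }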
Hi, the KNNPBI doesn't work for DeepThought (http://literumble.appspot.com/BotDetails?game=roborumble&name=cb.DeepThought%201.0)... any idea why?
I'm not sure, I'll look into it this evening. How long has DeepThought been in the rumble?
It seems that worked! DeepThought now has KNNPBI and NPP scores. If you want to improve your score the quickest, concentrate on these bots =)
I noticed the backlog was ~22 hours, which seems to me a bit excessive =) I've added a check to the upload so that if there is a backlog of more than 2 hours it discards new uploads until the backlog drops again. This check is refreshed twice an hour so that it doesn't thrash too much, and also to minimise the overhead of checking the backlog (which takes a second or so).
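The check itself is nothing fancy - roughly this, sketched in Java although the server is actually Python (names are made up):

    class BacklogGate {
        static final long CHECK_INTERVAL_MS = 30L * 60 * 1000;     // refreshed twice an hour
        static final long MAX_BACKLOG_MS    = 2L * 60 * 60 * 1000; // two-hour threshold
        private long lastCheck = 0;
        private boolean accepting = true;

        boolean acceptUpload(long now) {
            if (now - lastCheck > CHECK_INTERVAL_MS) {
                accepting = measureBacklogMs() < MAX_BACKLOG_MS; // the slow backlog query only runs here
                lastCheck = now;
            }
            return accepting; // when false, new uploads are discarded until the backlog drops
        }

        long measureBacklogMs() { /* hypothetical: inspect the processing queue */ return 0; }
    }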
I see that there is an actual listing for "Put_Your_Name_Here"; any chance we could get that translated to "Anonymous"?
I see quite a few robots that haven't changed are re-running pairings today as if they had new versions. Any idea why that is?
Looking into it myself. From what I can tell a bunch of battles didn't load into the Batch Rankings, so it assumed that they didn't exist and pulled them from the participants scores list. They've been slowly added back by clients over the last few hours.
I've removed the section of code that removed the battles from the Batch Rankings, but that is just putting a bandaid over the problem. I'll have to look deeper to see what caused them not to load in the first place.
Looks like half of General 1v1 has incomplete pairings now. Should I put my clients into overdrive to fix it, or am I making it worse by running clients because of some bug?
Running a bunch of clients isn't going to make it any worse, from what I can tell it was a once-off problem to do with the backend instance being unable to load data. I've removed the mechanism it used to remove the bots, but I'm still not sure (and may never know) why it happened.
I've also just identified a bottleneck/threadlock which will severely limit the ability to upload from multiple clients at once without increasing upload latency to where it will spawn new server instances and cause my quota to be hit again, but I have a fix for that which I'll implement and test tomorrow. The load right now seems pretty healthy though, I see in the logs uploads from you, MN and Wompi, thanks guys. I'll let you know when you can unleash the full power of your machine(s) =).
Its data isn't removed, so you can still see it in the BotDetails, but the pairings info in the other competitors which points to it is removed. Otherwise, over many versions, access to other bots would get slower and slower due to increased serialising costs.
Keeping pairing data around for a while can help protect the database against faulty clients removing competitors from the rumble, only for them to be re-added again some time later.
That sounds reasonable, yes. Perhaps add a 30-day error window, so the pairing data in the 'alive' bot only gets purged if the last battle was more than 30 days ago; until then it is just marked as 'removed'. I think this purging and checking will have to happen in the backend, because the frontends are fully loaded right now with your and Voidious's uploads.
The number of bots without full pairings has gone up - we were under 400 yesterday and are back up to 471 now. I noticed an over-quota message from last night; was there another loss of data?
Let me know if I should dial back my clients or if there's anything else I can do.
The source of the problem has to be tracked down or the rumble will never stabilize.
I guessed it was the excludes feature from the clients erasing pairing data in the server. But looks like it is something else.
Sorry guys, I was trying to see if I could use the marshal module to do my serialisation instead of cPickle, because my local testing showed it is about 50% faster, but it corrupted a few bots from each pairings dict so I quickly changed it back. I'm not sure why it had these issues since I tested locally on the dev server and it worked fine, but anyway it is fixed now, and was a completely different issue to what happened before.
It did hit the quota last night, so perhaps tone down the clients a little. There's a threshold below which it is cheap to run, but as the load increases I start leaving the free quota for the instances as well (not just database writes), which gets expensive much more quickly.
Took my clients from 4 down to 2.
Can you protect against us overloading your server? Both to avoid hitting quota, and to avoid someone DDoSing your bank account :-), it seems like it would be good to have some throttling or something in place.
Refilling the pairings is going to take a while. Is it possible to tune things (for now) to support a higher client load, to prioritize overall throughput over losing pairings here and there, while consuming less quota?
I've been trying to think of a good way to do that, but the 'recommended way' using Task Queues (which I can then limit to 3-4 queries a second instead of the 5-6 I was getting yesterday) will break any reasonable way of having priority battles.
Also, there is no way to programmatically retrieve the current quota usage stats, which means I can't do any auto-throttling.
I can tune it not to do database writes unless a bot has x or more pending battles, which is how I did it previously when on the free tier, but the majority of the time is actually taken up with (de)serialising the pairings data, which is why I was trying to shoehorn in marshal. I'll add a min pending battles limit, and you can turn those clients back on; we'll see what happens. Of course, it will probably only hit quota tomorrow night if it's an issue, since today has been pretty slow.
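The min pending battles idea is just a write-batching gate, roughly like this (illustrative Java with made-up names; the real thing lives in the Python server):

    class PendingWrites {
        static final int MIN_PENDING = 5; // hypothetical value of "x" above
        private final java.util.Map<String, Integer> pending = new java.util.HashMap<String, Integer>();

        // Returns true when this bot's cached battles should be flushed to the datastore,
        // so one (de)serialise + write covers several uploaded battles.
        boolean recordBattle(String bot) {
            Integer count = pending.get(bot);
            count = (count == null) ? 1 : count + 1;
            if (count >= MIN_PENDING) {
                pending.put(bot, 0);
                return true;
            }
            pending.put(bot, count);
            return false;
        }
    }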
I'm thinking of building a custom client which groups results from all local clients and uploads them in a single thread, so the server only needs a single instance per user to receive data.
Combined with multi-threading, clients can keep running battles in parallel while a single thread uploads everything, making it faster than the current client, while at the same time consuming less server resources.
That would be great. I'm not sure how you'd do priority battles though, would you have a local queue which would be filled, and you just take from there? I guess I could sort-of do this with task queues, but it wouldn't be very pretty.
Priority battles are downloaded by the uploader thread after each pairing is uploaded. They would be sent to a queue, which would be consumed by the clients.
Battle results would be sent to another queue, which would be consumed by the uploader thread.
That is the basic idea. You can add some logic inside the queues to make them smarter, like dealing with duplicated battles, excessive amount of data, or lack of data and fallback to random battles.
I've essentially implemented what you've said here, but on the server side using a Task Queue. The only thing we lost was on-the-fly updated battle numbers, but those aren't really being used now that we have priority battles. Also, priority battles are delayed by up to 100 pairings per rumble, but this new design should mean that stuff sticks around in local memory longer than before.
Once I add contributor stats I'll also add information about the current amount of queue backlog, so people can decide whether or not to run a client.
If you check your clients you can see that the uploads are going much quicker, and it tells you it is adding it to a queue instead =)
Seems like there's a bug in the TwinDuel rankings: [1]
DuoLedByDroid and TwintelligenceTeam each have battles vs themselves and it's counting as an extra pairing. (I noticed because the archived rankings code determines that rankings are stable by everyone having the same number of pairings.)
OK, I modified the code that checks that pairings are still in the rumble to also check against the bot name. They should be fixed next time they get a result upload.
Dunno if this is the same bug or a different one, but wompi.Kowari 1.4 and ag.Gir 0.99 show 989 pairings in the main rankings, but 988 in the bot details. (Should have 988, there's 989 bots.)
If you refresh your LiteRumble home page, you'll see a link at the bottom which takes you to the all-new LiteRumble Statistics page. This page is updated once an hour, on the hour, and contains the latest contributor information as well as how big the queue is and the expected processing backlog based on how many were processed in the last minute.
I also improved the OverQuotaError handling so that from now on your rumble client will see a regular server response and won't go crazy with the uploading errors. Also, from now on, when the queue is full it doesn't give an OverQuotaError, but instead waits half a second and sends a nice message back to the client saying that the upload was discarded.
Enjoy!
Yes. Because of this it doesn't have pure FIFO behaviour, sometimes new items get run before old ones, but for the most part it is FIFO.
Is the upload queue size the amount of battles which still needs to be processed?
Which means while the queue size is greater than zero, we can stop uploading battles and the server still has work to do?
Exactly. The amount of time it is estimated the work will take is the Queue Delay, but this is only based on the average speed in the last minute so it fluctuates a lot.
Did the queue replace the older cache? No more evicted battles?
Also, I know there is a quota for task queues, of about 100,000 messages/day.
No, the older cache is still there. It serves two purposes: 1) I don't have to write to the datastore for every single pairing, which is slow, and 2) often a single bot gets lots of battles all in a row (e.g. just released), and in this case my database writes will be cut almost in half if I am caching them. There are virtually 0 evicted battles now that I keep just two queues for 1v1/melee, instead of 1 queue per rumble with a minimum of 5 unsaved battles before a bot was written and a minimum of 10 bots needing to be written.
I think that was the old quota for task queues and it has now been increased. In the control panel I see a quota of 1,000,000,000 and I'm less than 1% of that.
Hi mate. I just wanted to thank you for all the effort you put into keeping LiteRumble up and running - it is very much appreciated by me. Awesome job. Of course, thanks also to Voidious, Sheldor and all the others who spend their time keeping Robocode alive. Right now I don't have much time to help out - I barely managed to write any bot code even over the Easter weekend :(. But let me know if I can drop some money or something else your way to show my respect for your work.
take care
Yeah man, LiteRumble is great! It's pretty remarkable Darkcanuck's server could just disappear one day and we have a viable alternative, with rankings already up to date, to switch to on the spot. And you've been lightning quick in adding polish and fixing stuff since it became the canonical rumble server. Great work, and I too am ready to pitch in for costs whenever you say.
Wompi,
Thanks for the mention. Though I doubt that my contributions to the RoboWiki have done much for Robocode itself.
I look forward to seeing how Kowari turns out.
Skilgannon,
Thanks for saving the RoboRumble. :)
Great job!
Voidious,
I agree. The LiteRumble's awesome!