Talk:LiteRumble
The rankings for roborumble, minirumble, and nanorumble experienced a hiccup, causing the rankings to restart from zero. However, all older results were added as soon as one battle was fought. Every bot had to fight every other bot once to get the rankings up-to-date. LiteRumble was able to recover the rankings with only 2000+ battles, thanks to a fix by Xor. However, there was a bug in the rumble client that prioritized already existing bots over bots without a ranking, making it harder to add missing bots back. Xor submitted a fix for the bug. The rankings were eventually stable again, but many pairings only had one battle, sometimes from two years ago. It's unclear if the lost battles can be retrieved.
— ChatGPT
Hi all, seems that for some reason the participants list for roborumble, minirumble and nanorumble had a hiccup (not for microrumble ??)
This means that the rankings start again from zero, but luckily all older results get in as soon as one battle has been fought. In order to get the rankings up-to-date again, every bot has to fight every other bot one time.
So, if you have the time and opportunity, please start your rumble client.
It seems less bad than I thought at first sight. As soon as a bot has fought a battle, it appears in the ranking.
When all bots are present, all bots have to fight one battle to get all pairings back. Hopefully LiteRumble is smart enough to get that done in one pass.
That happens once every few years ;( Maybe LiteRumble should have some check to prevent removing more than 100 bots at a time? Anyway LiteRumble *is* smart enough to recover with only 2000+ battles (1000+ to add back, 1000+ again to fix pairings), instead of 1000000 battles.
However a bug in the rumble client actually prioritized already existing bots over bots without a ranking, except in the clean run case. So it actually takes many more battles to add missing bots back ;(
I submitted a PR for this bug. https://github.com/robo-code/robocode/pull/61.
Bots without rank should always take highest priority, in all scenarios.
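For illustration, battle selection that respects this principle could look roughly like the sketch below (this is not the actual roborumble client code; the class and field names are made up):

    import java.util.Comparator;
    import java.util.List;

    // Illustrative sketch only: always prefer opponents that have no ranking yet,
    // then opponents with the fewest existing pairings.
    class PrioritySelector {
        static final class BotEntry {
            final String name;
            final boolean hasRank;  // false for bots not yet in the ranking
            final int pairings;     // distinct opponents already fought
            BotEntry(String name, boolean hasRank, int pairings) {
                this.name = name; this.hasRank = hasRank; this.pairings = pairings;
            }
        }

        static BotEntry nextOpponent(List<BotEntry> candidates) {
            return candidates.stream()
                    .min(Comparator.comparing((BotEntry b) -> b.hasRank) // false (unranked) sorts first
                            .thenComparingInt(b -> b.pairings))
                    .orElseThrow(() -> new IllegalStateException("empty participants list"));
        }
    }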
Bad news. Once a bot is added, the pairings are only updated for newly added battles, e.g. one bot with 400 pairings gets to 401 when one battle is submitted. The rest of the pairings don't seem to be added back automatically. So this means 1000000 battles are needed. ;(
I submitted a PR to literumble. https://github.com/jkflying/literumble/pull/3 Hopefully this PR will solve the issue, and with 1000+ battles the rankings should become stable again.
Merged and deployed, thanks. Should we also update Robocode versions?
Great! Anyway I think we could wait for the next release, where the fix for unranked bots is merged ;)
Looks like the ranking isn't recovering as fast as expected; bots with 400 pairings still go to 401 instead of 1189 after 1 battle. Any idea why this behavior persists after the PR?
Note that of the 507 still-unstable bots in roborumble, 449 are also in minirumble, 421 in microrumble and 228 in nanorumble. So it seems that bots that are also in the mini/micro/nano rumbles take longer to reach a stable state.
Also note that 391 of the 507 unstable bots were last updated after the fix was deployed.
It is restoring quite fast. At this moment half of the bots in roborumble have all their pairings back. And most of the others have at least 800 pairings.
That may be caused by the batch processing. However, if batch processing is the cause, it should still fully recover.
It looks like the problem is more serious this time.
Have a look at this bot: a lot of pairings were last battled in 2020, and a lot of pairings have only 1 battle, meaning that some data wasn't recovered. http://literumble.appspot.com/BotDetails?game=roborumble&name=nz.jdc.nano.NeophytePattern%201.1&order=-Battles
Rankings in roborumble, minirumble and nanorumble are stable again; microrumble was stable from the beginning. But as Xor indicated, a lot of pairings have only 1 battle, sometimes even 1 battle from 2 years ago. Is there something we can do to retrieve the lost battles, or should we continue with the current situation?
Thinking about it again, there may not be any data loss. For each pairing to have 1 battle, we need 500,000 battles, which takes a few months of continuous running. It's not surprising that it takes 2 years for a full round.
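As a rough sanity check (taking the ~1190 participants implied by the 1189 full pairings mentioned above): n(n-1)/2 is about 1190 x 1189 / 2, roughly 707,000 pairings, i.e. the same order of magnitude as the 500,000 figure above, and at a few battles per minute per client that is indeed months of continuous running.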
Most of the 1-battle pairings are from 2022.3.23 (or 2020.10, I don't remember exactly), the exact time the last hiccup happened. This strongly indicates that the data loss happens when a battle is fought for a missing pairing.
I'm noticing that some of LunarTwins' scores in TwinDuel have shifted dramatically from what they were as of RumbleArchives:TwinDuelRumble_20200126, despite the robots involved in said pairings not having been updated since. Particularly versus the following four:
- bvh.two.Valkiries 0.44tmk3b
- bvh.two.Ravens 0.2
- gh.twin.GrauwuarG 0.41
- krillr.mini.JointStrikeForce 2.0c
which are four bots that have been unchanged since 20200126, that LunarTwins used to win decisively against, but appears to no longer do so in the TwinDuel LiteRumble. It also appears that those pairings had their battle counts versus LunarTwins reset more recently than some others, for some unknown reason? Not sure. I'll be looking into it more some time, but it makes me wonder if this was due to a change in Robocode version.
To update, it seems Robocode versions 1.9.3.8 through 1.9.4.1 had entirely broken getTeammates/isTeammate, which breaks various TeamRumble/LiteRumble bots, including but surely not limited to LunarTwins.
This bug appears to have been introduced in 1.9.3.8 as a side effect of the fix for another bug.
Version 1.9.4.2 fixes a bug with getTeammates/isTeammate.
So Skilgannon, if you're reading this, we should probably update the literumble version to 1.9.4.2, and also clear all TeamRumble/TwinDuel pairing data that was from a client with one of the flawed versions. Given things appear to have gone from 1.9.3.5 to 1.9.3.9 in LiteRumble, it looks like it's just the 1.9.3.9 results that need to be cleared from TeamRumble/TwinDuel pairing data. :)
Rumble is updated to only accept 1.9.4.2. I will wipe the Team / TwinDuel data; unfortunately, they aren't stored by upload version.
So, while updating to 1.9.4.2 fixed a badly broken TeamBot situation, it introduced a new problem that I first noticed with Tron. The precise cause is unclear to me at present, and I can't debug into a closed bot that isn't giving a stack trace, but some change between Robocode 1.9.4.1 and Robocode 1.9.4.2 appears to have broken bots that load data files that come preloaded in their JAR files. In the case of Tron this is used for a configuration properties file, and being unable to load this is causing Tron to start in challenge/reference mode instead of normal mode.
I'm doubtful this bug only affects Tron, and removing tainted data from the rumble could be troublesome.
Bug report: here
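For context, the pattern such bots rely on looks roughly like this (a sketch only; the file name is made up, and this is not Tron's actual code):

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    import robocode.AdvancedRobot;

    // Sketch of how a bot typically reads a config file shipped in its JAR's
    // .data directory. "config.properties" is a placeholder name.
    public class ConfigLoadingBot extends AdvancedRobot {
        private final Properties config = new Properties();

        @Override
        public void run() {
            File f = getDataFile("config.properties"); // Robocode resolves packaged data files here
            try (FileInputStream in = new FileInputStream(f)) {
                config.load(in);
            } catch (IOException e) {
                // If the packaged file can't be found, fall back to defaults,
                // which is roughly the failure mode described above.
                out.println("config not found, using defaults: " + e);
            }
            // ... normal robot behaviour would follow here ...
        }
    }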
Hi Skilgannon,
Would you mind bumping this thread when you are changing the Allowed Robocode versions?
I run my clients pretty much unattended, so they try to upload rankings and fail. Unfortunately, there is no way to notify a human unless one stares at the console all the time.
But updates in the wiki thread would propagate to my rss reader quite quickly.
Happy New Year and thanks for running LiteRumble.
No problem. I actually only changed it about 2 hours ago, if you didn't notice I would have posted something =) I'm also going to add back those historical bots which were removed because of the compatibility issues. Have a good New Year!
As promised (in 2015), bump, I've upgraded to 1.9.3.5 =)
Hi all, Literumble has moved up to 1.9.3.9 (from 1.9.3.5)
Ah, I forgot about this thread. Yes, I updated the required version so that it will use the HTTPS links instead of HTTP, now that the robowiki supports these.
Literumble is now updated to accept battles only from 1.9.3.5! Happy rumbling!
Robocode 1.9.3.4 is released, with fixed meleerumble pairings and a fixed codesize utility (for lambdas), along with other fixes. Should we upgrade now and see the changes?
I've updated the accepted client version to 1.9.3.4 =)
1.9.3.4 has the -cp option set to the wrong value, causing codesize not to work (and pushing mega bots into the nano rumble) if it's not fixed manually. We should disable this version and wait for fnl to fix it...
I made a pull request: https://github.com/robo-code/robocode/pull/14
I've rolled back, let me know when 1.9.3.5 is available with the fix =)
I've seen many bots have KNNPBI; however my bot still has no KNNPBI (all zeros) after a long period of time ;( http://literumble.appspot.com/BotDetails?game=roborumble&name=aaa.SimpleBot%200.022d
So what's required for a bot to have KNNPBI?
Finally, after having 978 pairings, the KNNPBI is shown. Anyway, I'm still wondering what made the KNNPBI all zeros.
And in 0.022b the KNNPBI is still all zeros, see http://literumble.appspot.com/BotDetails?game=roborumble&name=aaa.SimpleBot%200.022b
The computation of those values is batched; they are computed every 24 hours. One possible explanation for the older version still being all zeroes is that maybe only the latest version of a bot is considered when doing the computation? Not sure about this, though; it just makes sense, and I only had a really superficial look at the code. You can probably try to figure it out here: https://bitbucket.org/jkflying/literumble/src/38f6e71de1c6?at=default
KNNPBI (and the other batched rankings, NNP, Vote) are calculated once every 8 hours, since they can't be calculated incrementally. If you remove your bot before it is calculated, it won't be calculated, since it doesn't get recalculated on old bots.
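For anyone curious about what a KNN-based problem bot index roughly means, here is a sketch of the general concept only. It assumes the value is the actual score against an opponent minus the score predicted from the K bots nearest in overall APS; this is an assumption for illustration, not LiteRumble's actual formula or code, which is Python:

    import java.util.Collections;
    import java.util.Comparator;
    import java.util.Map;

    // Conceptual sketch of a KNN-based problem-bot index: how much better or worse
    // a bot does against one opponent than its K nearest neighbours (by overall APS) do.
    class KnnPbiSketch {
        static double knnPbi(String bot, String opponent, int k,
                             Map<String, Double> overallAps,                // bot -> overall APS
                             Map<String, Map<String, Double>> pairingAps) { // bot -> (opponent -> APS)
            double myAps = overallAps.get(bot);
            double predicted = overallAps.keySet().stream()
                    .filter(other -> !other.equals(bot))
                    .filter(other -> pairingAps.getOrDefault(other, Collections.emptyMap()).containsKey(opponent))
                    .sorted(Comparator.comparingDouble((String other) -> Math.abs(overallAps.get(other) - myAps)))
                    .limit(k)
                    .mapToDouble(other -> pairingAps.get(other).get(opponent))
                    .average()
                    .orElse(Double.NaN); // no neighbours share this pairing yet
            return pairingAps.get(bot).get(opponent) - predicted;
        }
    }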
Btw, is it possible for a bot to have Vote and ANPP calculated and shown correctly, while having NPP and KNNPBI unavailable?
The link below captures an instance of this weirdness. http://web.archive.org/web/20181001092218/http://literumble.appspot.com/BotDetails?game=roborumble&name=aaa.n.ScalarN%200.011d.147
It could happen sometimes if the saving fails. It should correct itself soon though.
It seems that both 1v1 and melee now show "Rankings Stable" instead of "Rankings Not stable".
I once thought that "Rankings Not Stable" is hardcoded to show that the rankings are never stable so one should always run more battles.
But today is the first time I noticed "Rankings Stable", quite surprising.
So, what's the mechanism behind "Rankings Stable" and "Rankings Not Stable"? Is "Rankings Stable" displayed whenever every bot gets full pairings?
Your observation coincides with mine. Once all bots have paired with each other at least once, the ranking gets the stable status. Sometimes it does not happen for a long time because of missing bots or some bots crashing with a newer version of robocode. This is why the participants list sometimes gets pruned.
If the ranking is unstable for a long time, I usually look at which bot is missing a pairing and search for a reason in the rumble client log.
Usually, stabilization takes about a day for each new bot.
Yeah, Monk has had an incorrect URL for nearly half a year, making newly updated bots miss that pairing. And in 1v1 there are more bots having problems with the current settings (robocode 1.9.2.5 and Java 8).
Should we have a clean-up, or create a new rumble, to remove bots with compatibility problems, which only add noise to the rumble?
I personally oscillate between "if the author does not care, why should I?" and "preserve the history". If you are in the second camp, let me remind you about my FixingParticipantLinks script which relinks missing bots to the strange automata archive.
What is our problem with Java 8? Do we already have bots with Java 9? Or is robocode itself not backward compatible, and you see it on a big enough pool of robots?
My opinion is that as long as a bot works fine on the current settings (robocode 1.9.2.5 and Java 8), we should "preserve the history". But once it produces random results (e.g. crashing half of the time), we should remove it (until the author fixes it).
Bots known to crash on some machines:
- apc.Caan 1.0
- dam.MogBot 2.9
- sgp.JollyNinja 3.53
I've been away for quite some time and I'll probably come back once I graduate. I still care about my bots, though (despite Monk being buggy as hell atm). I used to make use of Drive to provide the links, but I didn't know they would break after some time. What would you guys suggest me to do? Is the solution proposed above (fix script) sufficient for now?
Well, do not trust the modern hype, i.e. the cloud. But I guess you already know that.
If you cannot host your bot yourself, put it in the cloud; usually within a day or sooner it appears at [archive]. Then just update the link to point there. I think, as of now, that is the most reliable way. Many thanks to Rednaxela for this effort.
Well, in this case, the drive works totally fine ;)
Just have a look at this commit: http://robowiki.net/w/index.php?title=RoboRumble/Participants/Melee&diff=52900&oldid=52879
Confirmed, it changes to Stable when all bots have full pairings.
If you find a bot that repeatedly crashes, IMO remove it from the rumble and put it in the list below. If the author has a page, make a comment and hopefully they will fix it.
Updating the participants list manually fails from time to time: network issues, misoperation, etc. Removing a bot from the rumble takes O(1) time, but re-adding n bots takes O(mn) time, where m is the total number of participants.
However, in principle it takes O(1) time to re-add a bot, since no data is lost. Is it possible to tweak LiteRumble to support faster re-adding?
Re-adding bots actually only takes 2*m battles: one pass to add the bots back, and one more to add all of the pairings after all of the other bots have been added. Doing something different would require updating the rumble protocol, which I'd prefer not to touch.
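To put numbers on that (assuming the ~1190 participants mentioned earlier): 2*m is about 2 x 1190, roughly 2,400 battles, which matches the "2000+ battles" figure from the earlier hiccup, versus roughly 1190 x 1189 / 2, about 707,000 battles, if every pairing had to be re-fought from scratch.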
Finally 1.9.3.1 is out, with Skilgannon's shorter bot list update time. Any plan to move on? And then we can use lambdas without transpilers as well.
According to git, there is a bug fix (fixing SittingDuck crashes) either in 1.9.3.1 or right after the release. Unfortunately git does not have the tag. Maybe we can ask fnl to make one more release or at least clarify this part.
But I am personally not so excited about Java 10 coming forward. Debian stable has only Java 9 in the distribution. It will be a pain to switch to Java 10. I think so far we agreed that robocode clients should support Java 8, not even Java 9.
Are those lambda things so critical for bot development? I know you apparently use them, but do you really need them? Do they give you extra speed or robustness?
Yeah, Java 9/10 support is poor even today, and they didn't add anything as useful as lambdas, so sticking with Java 8 is not a big problem.
However, with lambdas, Java 8 is a completely different language. Lambdas get more optimizations (e.g. omitting unnecessary object allocation) compared to anonymous classes, and with lambdas you can avoid a lot of unnecessary boxing and unboxing, which is unreasonably slow, so yes, they give more speed. Lambdas also remove much of the cruft in pre-lambda Java development, resulting in cleaner and more readable code, so yes, they give robustness.
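A small example of the difference (plain Java 8, nothing Robocode-specific):

    import java.util.Arrays;
    import java.util.Comparator;
    import java.util.List;

    public class LambdaVsAnonymous {
        public static void main(String[] args) {
            List<String> bots = Arrays.asList("DrussGT", "Diamond", "Shadow");

            // Pre-Java-8 style: a verbose anonymous class.
            bots.sort(new Comparator<String>() {
                @Override
                public int compare(String a, String b) {
                    return Integer.compare(a.length(), b.length());
                }
            });

            // Java 8 style: the same thing with a comparator factory and method reference.
            bots.sort(Comparator.comparingInt(String::length));

            // Primitive streams avoid the Integer boxing an equivalent boxed loop would do.
            double avgLength = bots.stream().mapToInt(String::length).average().orElse(0);
            System.out.println(bots + " average name length: " + avgLength);
        }
    }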
I personally rely on lambdas heavily for cleaner and more readable code, but with customized build scripts I can easily transpile my code to Java 7, so moving on is not hugely beneficial for me. But it's not easy to set up such a build script, so other people, especially newcomers, will never have a chance to use lambdas in robocode development. They will even be scared off and discouraged by the fact that their bot cannot run in the roborumble, even with Java 8.
By not moving on to robocode 1.9.2.6+, we are wasting a large part of the effort of moving to Java 8. With the better bot list update mechanism, I think it's definitely time to move on.
Since today, when uploading results, this message keeps appearing in my RoboRumble client:
java.io.IOException: Server returned HTTP response code: 500 for URL: http://literumble.appspot.com/UploadedResults
Did anybody observe similar results?
Ah, this was caused by a bug due to a new check I added for dropping battles more than 24 hours old (to prevent old versions being added back by mistake). Can you try again and let me know if it is fixed for you now?
It seems that when uploading out-dated pairings, the client is still receiving HTTP 500, which causes those out-dated pairings to be doubled (another bug) and re-uploaded twice each time... and then to fail again, doubling each time, which grows like crazy.
Will you change the response to something like "200, out-dated pairings dropped" or so to fix this? Thanks ;)
Well, that was the intention, but it seems that Python doesn't auto-convert datetimes to strings so my logging was crashing it. Should be fixed now!
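For reference, the check itself is tiny: the upload lines carry a millisecond timestamp as the fifth comma-separated field (see the log excerpt below), so dropping stale uploads comes down to something like this sketch. It is illustrative only; the actual server-side check is in LiteRumble's Python code:

    import java.util.concurrent.TimeUnit;

    // Sketch only: drop any uploaded battle whose client-side timestamp is older than
    // 24 hours. The field layout is assumed from the client's result lines, e.g.
    // "meleerumble,35,1000x1000,Xor,1505812985874,SERVER abc.Shadow 3.84i,...".
    class StalenessCheck {
        static final long MAX_AGE_MS = TimeUnit.HOURS.toMillis(24);

        static boolean isStale(String resultLine, long nowMillis) {
            String[] fields = resultLine.split(",");
            long battleTime = Long.parseLong(fields[4].trim());
            return nowMillis - battleTime > MAX_AGE_MS;
        }
    }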
More information:
java.io.IOException: Server returned HTTP response code: 500 for URL: http://literumble.appspot.com/UploadedResults
Unable to upload results meleerumble,35,1000x1000,Xor,1505812985874,SERVER abc.Shadow 3.84i,24248,6434,28 maribo.mini.MiniQuester 0.1,16657,4100,5
java.io.IOException: Server returned HTTP response code: 500 for URL: http://literumble.appspot.com/UploadedResults
Unable to upload results meleerumble,35,1000x1000,Xor,1505812985874,SERVER abc.Shadow 3.84i,24248,6434,28 aaa.ScaledBot 0.01d,15278,3625,1
java.io.IOException: Server returned HTTP response code: 500 for URL: http://literumble.appspot.com/UploadedResults
Unable to upload results meleerumble,35,1000x1000,Xor,1505812985874,SERVER abc.Shadow 3.84i,24248,6434,28 mld.DustBunny 3.8,14411,3784,0
java.io.IOException: Server returned HTTP response code: 500 for URL: http://literumble.appspot.com/UploadedResults
Unable to upload results meleerumble,35,1000x1000,Xor,1505812985874,SERVER abc.Shadow 3.84i,24248,6434,28 cb.nano.Insomnia 1.0,11918,2981,1
java.io.IOException: Server returned HTTP response code: 500 for URL: http://literumble.appspot.com/UploadedResults
Unable to upload results meleerumble,35,1000x1000,Xor,1505812985875,SERVER abc.Shadow 3.84i,24248,6434,28 ayk.WallHugger 1.0,8941,2568,0
java.io.IOException: Server returned HTTP response code: 500 for URL: http://literumble.appspot.com/UploadedResults
Unable to upload results meleerumble,35,1000x1000,Xor,1505812985875,SERVER abc.Shadow 3.84i,24248,6434,28 yk.JahRoslav 1.1,8499,1922,0
java.io.IOException: Server returned HTTP response code: 500 for URL: http://literumble.appspot.com/UploadedResults
Unable to upload results meleerumble,35,1000x1000,Xor,1505812985875,SERVER abc.Shadow 3.84i,24248,6434,28 rampancy.Durandal 2.1d,7268,1522,0
This is what I'm getting constantly (every time it uploads).
It's been a while since we updated the rumble client version, and the new version brings several important fixes. I'd really appreciate it if someone could set up a quick benchmark of a battle or two for each bot in the rumble, and then run it on the old and new versions to make sure we don't have any regressions. Once this is done we can upgrade the client =)
As far as I know, Robocode 1.9.3.0 hasn't been officially released yet. The website and GitHub still name 1.9.2.6 as the latest version, and there is no 1.9.3.0 download. You can only get it by building from the latest git master. What I linked to was a draft of the new changelog.
I don't think any new releases will be made until poor Fnl finishes dealing with all of the bug reports I piled onto him. What I have been doing for the past week is emptying my mental list of annoyances with Robocode onto its bugtracker.
So currently, it is still in development, and it's a bit too early to do regression testing with this new version.
What does need testing, however, is Robocode on Java 9. We already found CPU constant calculation and team JARs to be broken there, and doubtlessly there are more issues.
Robocode 1.9.3.0 has been released.
Great. As soon as we have a benchmark comparison making sure no subtle score changes have crept in or tons of bots are now broken I'm happy to change the LiteRumble over!
Recently, I noticed that more than half of the battles are dropped because the queue is full, and the drops don't go away even if I wait a few minutes. It seems that all the rumble clients are uploading battles periodically, and that the uploads are pretty concentrated, e.g. all four of my clients upload ~200 battles within ~3 minutes, which makes the queue fill up immediately. And if I take a look at literumble/statistics, I can see that there are 5 to 7 clients uploading within 2 minutes.
It generally takes a client about 15 min to finish 50 battles, but if we vary this number to primes, the uploads will get more evenly distributed, reducing the high concurrency which causes a lot of dropped battles.
Reducing NUMBATTLES would probably help here too. It would also reduce the delay which is the main cause of duplicated pairings for new bots being entered. Maybe a NUMBATTLES of 20 in the main rumble would be good enough to solve the client component of this.
However, I think one of the main causes of the full queue is the batch processing for Vote/NPP/KNNPBI, since the queue needs to be paused while this is running. Because it is paused, the projected processing time goes very high, and it stops accepting new uploads. I have an idea on how to tune this; it should help a bit.
However, even a NUMBATTLES of 3 can't prevent most of the battles from being dropped ;/
Seems that with 8 clients running the rumble at the same time, no attempt will help without stopping some clients.
Worth mentioning that I can notice dropped battles when there are 6 clients, though not frequently. Seems that with 2 more clients, the effectiveness drops considerably?
Btw, one thing that's really interesting is that the duplicates of multiple versions can last for hours. Seems that some clients are not checking the participants list for hours.
Got it. Maybe after the queue is paused for batch tasks and then resumed, it stays near full as there are still many pairings being uploaded. Like some DoS, this decreases the ability to handle high concurrency (although the average number of pairings uploaded per minute is not very high, they come in during a short period of time and get dropped).
Then I think we could increase the queue size a little after a batch task (and then decrease it back to the normal size slowly, to make sure new uploads won't wait forever after some flood of uploads).
Or, we can handle uploads during the pause separately: don't put them in the normal queue; rather, store them in a separate queue (and cap it at normal uploads per minute * pause time).
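Roughly like the following sketch (Java just for illustration; LiteRumble itself is Python, and all names here are made up):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Sketch of the "separate queue while paused" idea: uploads arriving while the main
    // queue is paused for batch work go into a bounded overflow queue instead of being
    // rejected, and are drained back once batch processing finishes.
    class PausableUploadQueue {
        private final BlockingQueue<String> main = new ArrayBlockingQueue<>(1000);
        private final BlockingQueue<String> overflow; // capped at uploads/minute * expected pause
        private volatile boolean paused = false;

        PausableUploadQueue(int uploadsPerMinute, int pauseMinutes) {
            this.overflow = new ArrayBlockingQueue<>(uploadsPerMinute * pauseMinutes);
        }

        // Returns false only if the relevant queue is genuinely full.
        boolean submit(String upload) {
            return paused ? overflow.offer(upload) : main.offer(upload);
        }

        void pauseForBatch() {
            paused = true;
        }

        void resumeAfterBatch() {
            paused = false;
            // Move as much as fits back into the main queue; the rest waits for the next drain.
            overflow.drainTo(main, main.remainingCapacity());
        }
    }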
I was running 8 clients, that was probably causing it. Particularly melee clients cause a huge number of uploads for the amount of processing time required by the client.
I'll save my clients for when there are fewer others running =)
I've been experiencing constant "queue full" messages in the past 2 hours in MeleeRumble, with 3 melee clients + 3 rumble clients. Should this really be happening this often?
I noticed that every time the queue is paused for batch tasks, the massive queue-full messages don't stop until I pause the clients for a few minutes.
That may be because when the queue size is near the max size, the capacity for handling high concurrency decreases dramatically, although the average processing power doesn't decrease at all.
Using a separate queue when it is paused may help, IMO.
Would you mind adding another column called Opponent APS to the bot comparison? When sorting by opponent APS, it would be really useful to see the difference between two bots against opponents in different APS ranges, as in the Diff Distribution graphic, but with more information, especially the bot name. This could also help us create a good test bed ;)
I can take a look (although not this weekend, I'm away from home). However, what would you consider appropriate behavior on bots which had been removed from the rumble, but which are a shared pairing? The APS/diff image does this by just ignoring those pairs, but I don't think we want to do that here. Do I put a 0.0?
Can we assume that APS is relatively stable? Since we can click through to the details page to see the historical APS even when that opponent is removed, can we simply put that value?
Oops, this assumption breaks when comparing ancient bots ;( Then polluting the table must be a bad idea. However, why don't we use NaN or N/A instead of 0?
NaN sounds most appropriate. I don't want to have to fetch each bot object that is not in the rumble anymore to look up its last APS.
Done. I also added a link on the BotDetails page to find the bot on the wiki.
LiteRumble says OK. Queue full,XXX vs XXX discarded.
and it is discarding hundreds of battles :\
If the queue gets too long then the priority battles have a severe lag, so the rumble gets really inefficient. Max queue size is based on projected processing time.
Hi, after recent bot removal and restoring we have strange artifacts: asymmetrical pairing reports.
Have a look at Galzxy 01 stats and sample.Walls 1.0 stats. You can see that Galzxy 01 has 18 battles against Walls. But if you look at Walls' stats there are no reports of these 18 battles with Galzxy 01. Galzxy 01 is simply missing from the list of Walls' battles.
You just need to wait for Galzxy to get another battle, and it will be fixed again.
https://dl.dropboxusercontent.com/u/4066735/literumble-template.zip is not available now ;(
And archive.org doesn't have an archive of it ;( Does anyone have a backup of it?
By the way, I'm really wondering how LiteRumble works ;) I used to think the battles were all run in the cloud, but then I discovered http://literumble.appspot.com/RumbleStats which shows a lot of contributors with familiar names ;) How can I set battles to run on my computer and submit the results to LiteRumble? I didn't see any discussion about it.
That isn't needed anymore, the newer versions of Robocode are preconfigured to support Literumble.
Just download 1.9.2.5, edit robocode/roborumble/[roborumble/meleerumble/etc].txt to have your name, and you can run battles on your computer to contribute to the rankings. The website just displays the battles that users have uploaded in a nice way.
It should be tested a lot to be sure that there aren't any errors.
I have created my own LiteRumble instance running as a Google app, as described in previous discussions. Now I want to know if it is possible to delete the battle history and the participating robots? I am experimenting with it, since we want to have a roborumble event at our office, and I want to delete my previous "testing" robots and matches and have a clean slate when we do the event.
You should be able to delete the data from the AppEngine web console. Otherwise you can simply make the clients upload to a differently named rumble, and the old one can be kept for the demo/setup bots.
I have tried to remove the data from the datastore by selecting all database entries and deleting them. But the data on the webpage is still there, so the data must be stored somewhere else. Creating a new rumble seems like an annoying workaround :)
There may still be a copy in Memcache - if you clear Memcache and the datastore everything should be gone.
One thing I really missed from the old rumble was the LRP, but without ELO/Glicko we can't really do the whole straight-line fit any more. So, instead I have added a Score Distribution image on every bot's details page. The red is APS and the green is Survival (as seen in the image mouseover). The image is directly embedded in the HTML using data URIs, so if you are using IE, only 8 and later work; otherwise pretty much everything supports it. I'm also planning to add this to the BotCompare page so you can analyse differences in score compared to opponent score for both APS and survival.
Ahhh, neat stuff. That's very nifty with directly embedding the image data there. For some reason the image is displaying very tiny for me though under Firefox 20.0. It gets scaled to the box around it properly under Chromium, but not Firefox.
EDIT: Nevermind... the styles.css file was being cached and that was the problem. A ctrl-r fixed it.
Ah yeah, the styles.css was changed so you need to do a hard-reload.
I've now added the KNNPBI to the bot-details Scores Distribution, and the bot-compare has a Diff Distribution.
There is something fishy with the chart in the right part, close to the end. If you look at the CunobelinDC score distribution above, you can see that there are no corresponding red points for stronger opponents, while the blue and green ones are there. This is quite a common theme for other bots as well.
Also have a look at this EvBot score distribution; you can see the problem with normalizing, i.e. about 1/4 of the space in the right part of the chart has no points, which is a non-optimal use of the chart space.
Is it still showing the problem? I don't see anything wrong right now. I had some issues with (I suspect) bad bytecode and versioning, but that should be fixed now.
As for the EvBot chart, that is because in meleerumble nobody gets higher than ~75%, so the top 25% is empty. Although I guess I could normalise to the top score, I'd rather have the charts consistent as better bots are released.
Aha, I see now why melee charts were somewhat off.
But I insist that I do not see red points for X>95% for CunobelinDC. Look at the 5 rightmost green points: I cannot locate red (APS) or blue points for the same X values. It might be an aliasing problem, or maybe the points are just on top of each other.
Green is survival, and so the X value is the average survival score of the enemy bot. The red and blue use enemy APS as the X value, not survival, and since survival scores are higher the green dots go further to the right.
I've actually thought about changing the X axis to just be enemy APS to make it easier to interpret. Or ordering the X-axis by rank instead of using APS values.
I've changed it so they all use APS on the X axis, so it should be clearer now.
Does anyone have some advice for starting up a custom and/or private LiteRumble? I've got a new batch of programming students that I'm leading through Robocode and I'd love to run a custom bracket with just my kids in it as I've done in years past.
Sure, it's easy enough.
- Create your own app on Google AppEngine
- Download and extract the code from bitbucket
- Change the app name in app.yaml to the name of the app you created
- Download and install the Google AppEngine python SDK
- Run the following in the code directory:
appcfg.py update . && appcfg.py update batchratings.yaml
- This should give you an empty LiteRumble instance running on your app
Once you have a copy of LiteRumble running, all you need to do is modify the rumble client in roborumble.txt to point to your new server for uploads. You also need a new participants list, which you can host on appengine too if you don't mind continually re-deploying, or you can make a wiki page somewhere. The client just parses everything between the two <pre> tags.
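For example, a minimal participants page only needs entries of the form "package.BotName version,download-url", one per line between the <pre> tags; the names and URLs below are placeholders:

    <pre>
    mypackage.MyFirstBot 1.0,https://example.com/bots/mypackage.MyFirstBot_1.0.jar
    mypackage.SampleEnemy 0.2,https://example.com/bots/mypackage.SampleEnemy_0.2.jar
    </pre>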
Have fun!
Excellent. I can just host participants on a Dropbox text file. Thanks for the info!
By the way, a favorite thing I do when introducing my kids to Robocode is to have a pair of them (driver and gunner) pilot sample.Interactive at a moderate simulation speed against some sample bots until they get used to it. Then they face DrussGT. Thought you'd want to know that you've caused some laughter and groans of frustration from some prospective high school coders!
Brilliant. I've always found the sample.Interactive very difficult to control, I don't think I'd stand a chance against DrussGT =) I bet if I set the bullet colour to something more similar to the background it would make it even harder for interactive users >:-D
The kicker is always that they have a very, very hard time adapting to a top-of-the-line bot like DrussGT or Diamond. I've had students say it's like the bot is reading their mind. Then I drop the bomb that the bot can't see bullets, while the students can. It's a great and impactful "Math is POWERFUL" moment!
Of course, set the sim speed low enough and get a patient, non-wasteful gunner, and they will trash DrussGT because they can dance juuust aside of each bullet. But as long as I set the sim speed such that it keeps them on their toes, it's a rough but educational ride. Fun for spectators too!
I have some ideas about dealing with interactive users - closer range, not letting energy levels drop below the enemy's, varying colours of dark blue and grey bullets - perhaps that should be something I work on next. I've neglected Robocode and have been working on more pure ML/AI problems instead, but this is something more on the behavioural side which AFAIK hasn't been done yet.
The sample bot Interactive is hard to control. For 1v1, all you would really have to change in response to what you see is orbit direction, distancing, current aiming GF, and bulletpower/when to fire. Everything else could be automatic 99+% of the time.
Would anyone be interested in a SuperInteractive wiki collaboration? Perhaps a challenge for driving it against DrussGT?
I was thinking of a fairly simple "SuperInteractive" which does regular wave-surfing, but also allows you to click on enemy bullets, which it will then dodge. Targeting, I feel, would be stronger without any human intervention.