User talk:Jdev/Questions
- RhinoScriptEngine - is it fair play or not?
I found out today that there's an embedded script engine that can be used to reduce codesize. Example:
    package lxx.test.js;

    import robocode.AdvancedRobot;
    import com.sun.script.javascript.RhinoScriptEngine;
    import javax.script.ScriptException;
    import javax.script.ScriptContext;
    import javax.script.SimpleScriptContext;

    /**
     * User: jdev
     * Date: 11.11.2009
     */
    public class JSTest extends AdvancedRobot {
        public void run() {
            RhinoScriptEngine rse = new RhinoScriptEngine();
            try {
                ScriptContext ctx = new SimpleScriptContext();
                rse.eval("var a = 100;", ctx);
                Double i = (Double) ctx.getAttribute("a");
                ahead(i);
            } catch (ScriptException e) {
                e.printStackTrace();
            }
        }
    }
So I could rewrite most of my code in JS and only call it from a nanobot, but I'm not sure that it is fair. What do you think about it? --Jdev 16:37, 11 November 2009 (UTC)
I think it was generally agreed that using interpreted languages to bypass the codesize utility isn't exactly fair, so although you can put your bot in the rumble to see where it would score, it would be better if you didn't leave it there, because based on the amount of code you have, it probably isn't a nanobot anyways. Look at White Whale and Talk:White Whale for more discussion on this =) It's quite cool that there is an included script engine in Java, I didn't even know about it =) --Skilgannon 17:19, 11 November 2009 (UTC)
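For reference, the same engine can be obtained through the public javax.script API rather than by instantiating the internal com.sun class, which keeps the code portable across JREs. A minimal sketch (the class name `JsEval` is mine; which JavaScript engine is present, if any, depends on the Java version):

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class JsEval {
    // Portable lookup instead of "new RhinoScriptEngine()". Returns null if
    // this JRE bundles no JavaScript engine (Rhino in Java 6/7, Nashorn in
    // Java 8-14, none in 15+) or if the script fails.
    static Double evalA() {
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("JavaScript");
        if (engine == null) {
            return null;
        }
        try {
            engine.eval("var a = 100;");
            // Rhino hands the variable back as a Double, Nashorn as an
            // Integer, so go through Number rather than casting directly.
            return ((Number) engine.get("a")).doubleValue();
        } catch (ScriptException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(evalA());
    }
}
```

Note the Number cast: casting straight to Double, as in the snippet above, works on Rhino but throws a ClassCastException on Nashorn.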
Contents
Thread title | Replies | Last modified |
---|---|---|
Missing bots request | 2 | 14:44, 7 January 2013 |
What can a robot's avg bullet damage in battle against itself mean? | 12 | 01:23, 22 June 2012 |
Multiple wave surfing | 6 | 08:53, 23 November 2011 |
Rumble Kings | 2 | 06:40, 17 October 2011 |
Test bed with stable results | 10 | 04:02, 28 September 2011 |
Can anyone share the currently missing bots from the rumble with me?

Download bot agrach.Dalek 1.0 (13/969)... Failed: java.io.FileNotFoundException: http://www.stud.fit.vutbr.cz/~xhajic01/agrach.Dalek_1.0.jar
Url: http://www.stud.fit.vutbr.cz/~xhajic01/agrach.Dalek_1.0.jar
Download bot agrach.MicroDalek 1.0 (14/969)... Failed: java.io.FileNotFoundException: http://www.stud.fit.vutbr.cz/~xhajic01/agrach.MicroDalek_1.0.jar
Url: http://www.stud.fit.vutbr.cz/~xhajic01/agrach.MicroDalek_1.0.jar
Download bot agrach.RobotSlayer 1.0 (15/969)... Failed: java.io.FileNotFoundException: http://www.stud.fit.vutbr.cz/~xhajic01/agrach.RobotSlayer_1.0.jar
Url: http://www.stud.fit.vutbr.cz/~xhajic01/agrach.RobotSlayer_1.0.jar
Download bot com.timothyveletta.FuzzyBot 1.1 (161/969)... Failed: java.io.FileNotFoundException: https://dl.dropbox.com/u/4735351/com.timothyveletta.FuzzyBot1.1.jar
Url: https://dl.dropbox.com/u/4735351/com.timothyveletta.FuzzyBot1.1.jar
Download bot davidalves.Firebird 0.25 (201/969)... Failed: java.io.FileNotFoundException: http://davidalves.net/robocode/robots/davidalves.Firebird_0.25.jar
Url: http://davidalves.net/robocode/robots/davidalves.Firebird_0.25.jar
Download bot davidalves.Phoenix 1.02 (202/969)... Failed: java.io.FileNotFoundException: http://davidalves.net/robocode/robots/davidalves.Phoenix_1.02.jar
Url: http://davidalves.net/robocode/robots/davidalves.Phoenix_1.02.jar
Download bot davidalves.PhoenixOS 1.1 (203/969)... Failed: java.io.FileNotFoundException: http://davidalves.net/robocode/robots/davidalves.PhoenixOS_1.1.jar
Url: http://davidalves.net/robocode/robots/davidalves.PhoenixOS_1.1.jar
Download bot froh.micro.Aversari 0.1 (304/969)... Failed: java.io.FileNotFoundException: http://dl.dropbox.com/u/60122033/froh.micro.Aversari_0.1.jar
Url: http://dl.dropbox.com/u/60122033/froh.micro.Aversari_0.1.jar
Download bot fruits.NanoStrawbery 1.3 (308/969)... Failed: java.io.IOException: Server returned HTTP response code: 403 for URL: https://doc-0c-98-docs.googleusercontent.com/docs/securesc/ha0ro937gcuc7l7deffksulhg5h7mbp1/t3rnhku5j9pqslcdl1920pfu2jgoeane/1356804000000/04024542935250065010/*/0B0bFe9FCJ2HSbjNJb2FkVnExYzg?e=download
Url: https://doc-0c-98-docs.googleusercontent.com/docs/securesc/ha0ro937gcuc7l7deffksulhg5h7mbp1/t3rnhku5j9pqslcdl1920pfu2jgoeane/1356804000000/04024542935250065010/*/0B0bFe9FCJ2HSbjNJb2FkVnExYzg?e=download
Download bot rampancy.Durandal 2.2d (684/969)... Failed: java.io.IOException: Server returned HTTP response code: 403 for URL: http://stanford.edu/~mchunlum/robocode/rampancy.Durandal_2.2d.jar
Url: http://stanford.edu/~mchunlum/robocode/rampancy.Durandal_2.2d.jar
Download bot rampancy.micro.Epiphron 1.0 (685/969)... Failed: java.io.IOException: Server returned HTTP response code: 403 for URL: http://stanford.edu/~mchunlum/robocode/rampancy.micro.Epiphron_1.0.jar
Url: http://stanford.edu/~mchunlum/robocode/rampancy.micro.Epiphron_1.0.jar
Download bot serenity.serenityFire 1.29 (753/969)... Failed: java.io.FileNotFoundException: http://www.robocoderepository.com/BotFiles/3071/serenity.serenityFire_1.29.jarS
Url: http://www.robocoderepository.com/BotFiles/3071/serenity.serenityFire_1.29.jarS
Download bot testantiswapgun.AntiSwap 1.0 (870/969)... Failed: java.io.FileNotFoundException: http://www.robocode.ilbello.com/asd.AntiSwap_1.0.jar
Url: http://www.robocode.ilbello.com/asd.AntiSwap_1.0.jar
Download bot uccc.Dorito 1.12 (899/969)... Failed: java.io.FileNotFoundException: http://devfluid.com/csc_w/images/e/e9/Uccc.Dorito_1.12.jar
Url: http://www.devfluid.com/csc_w/images/e/e9/Uccc.Dorito_1.12.jar
Download bot uccc.MilkyWay 1.01 (900/969)... Failed: java.io.FileNotFoundException: http://devfluid.com/csc_w/images/a/a6/Uccc.MilkyWay_1.01.jar
Url: http://www.devfluid.com/csc_w/images/a/a6/Uccc.MilkyWay_1.01.jar
Download bot uccc.RingDing 1.12 (901/969)... Failed: java.io.FileNotFoundException: http://devfluid.com/csc_w/images/5/5f/Uccc.RingDing_1.12.jar
Url: http://www.devfluid.com/csc_w/images/5/5f/Uccc.RingDing_1.12.jar
Download bot uccc.Scrapple 1.0 (902/969)... Failed: java.io.FileNotFoundException: http://devfluid.com/csc_w/images/7/7a/Uccc.Scrapple_1.0.jar
Url: http://www.devfluid.com/csc_w/images/7/7a/Uccc.Scrapple_1.0.jar
Download bot yarghard.Y101 1.0 (952/969)... Failed: java.io.IOException: Server returned HTTP response code: 403 for URL: http://sliwa.ws/RoboCode/yarghard.Y101_1.0.jar
Url: http://sliwa.ws/RoboCode/yarghard.Y101_1.0.jar
Download bot zzx.Serunyr 2.0.2 (969/969)... Exists!
Try the robot database at RoboRumble/Starting With RoboRumble.
If they are incomplete, I can upload another zip pack later.
Thanks, I'll skip that last link :)
I did some fast and rough research and found out that in battles against themselves, Tomcat gets ~1200 bullet damage, DrussGT ~1100, and Diamond ~1000. Could that mean Tomcat has a better gun relative to its movement, Diamond better movement relative to its gun, and DrussGT is well balanced? What do you think, guys? And a related question: how do you decide what to improve next, or what your robot's weakness is?
My first thought would be to look at distancing. If Diamond keeps a larger average distance than DrussGT, and DrussGT a larger one than Tomcat, the hit percentage would naturally be lower. Also, Tomcat might just have the most aggressive bullet power strategy - higher bullet power is always a trade-off of bullet damage vs survival.
Figuring out what to improve is a much tougher question. :-) Recently I've been refactoring Diamond and adding lots of tests, which has given me lots of little bugs to fix and ideas of what might work better. I think just going through your code and writing tests is a good way to get ideas, because you start really getting a feel for how things work. But other than that, I don't have any great advice.
The big ideas kind of come out of the blue, immediately kick ass in tests, and then you can just polish them and release. But usually it's not like that. =) One general thing is I try to think of how to make things more accurate, like data collection or how I'm normalizing my values. I'm really trying to avoid the "tiny tweak and test for hours" cycle recently and focus on significant behavior changes, which I find a lot more fun and which has a better chance of being a big improvement.
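The bullet-power trade-off mentioned above falls straight out of the standard Robocode rules: damage and gun heat grow with power while bullet speed shrinks. A quick sketch of the numbers (the formulas are the published Robocode physics; the class name is mine):

```java
public class BulletMath {
    // Standard Robocode physics, per the game rules.
    static double damage(double power) {
        return 4 * power + (power > 1 ? 2 * (power - 1) : 0);
    }

    static double speed(double power) {
        return 20 - 3 * power;
    }

    static double gunHeat(double power) {
        return 1 + power / 5;
    }

    public static void main(String[] args) {
        // Power 3.0: 16 damage per hit, but only 11 px/tick and a long gun lock.
        System.out.printf("power 3.0: dmg=%.1f speed=%.1f heat=%.2f%n",
                damage(3.0), speed(3.0), gunHeat(3.0));
        // Power 1.9: 9.4 damage, but faster bullets and a quicker-cooling gun.
        System.out.printf("power 1.9: dmg=%.1f speed=%.1f heat=%.2f%n",
                damage(1.9), speed(1.9), gunHeat(1.9));
    }
}
```

This is why total bullet damage in a self-battle says as much about bullet-power selection as about gun quality: a bot firing 3.0 racks up damage in bigger chunks even at a lower hit rate.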
Distance and bullet power, sure, thanks :) Looks like I'm still stuck with my gun...
About tests - I did start work on ConceptA because Tomcat now has high coupling (low cohesion?), so testing it is a difficult task :) But anyway, thanks for the advice :)
I agree that big things are better, but I think the APS gap between Tomcat and Diamond/DrussGT lies just in "tiny tweaks", which require hours of tests. And tests are a problem for me with my low-end first-generation mobile i3 :)
I do not know much about hardware, but I think my i3 at 1.3 GHz is much worse :)
Tomcat uses gun heat waves. It's not completely clear to me how precise intersection can help in surfing. Actually, Tomcat starts to surf the next wave when the closest wave is within two ticks of Tomcat's center. But thanks, I will add it to my todo list :) I'm not sure, but it seems that Tomcat was released with gun heat waves, and some tuning of them gave him some points.
Thanks for Diamond's gun, but I'd rather spend the time on Tomcat's development - I have a super secret big idea which must hit :) I hope it hits :)
I've always found it optimal to surf until the wave is one tick from center. Effectively, this is when it passes your center, since it will move and then check for collisions before you can move again. While there still could be firing angles that would hit you after this, most of them would have already hit you, so it makes sense.
I guess you're just using the firing angle on the last tick you surf that wave? Of all the firing angles that would hit your bot for that movement option, using precise intersection will let you use the angle at the center of that range as the input to your danger calculation. (Or the whole range, depending on how your danger formula stuff works.) I was surprised by how much I gained by it. Pretty sure I argued against it having value for a while before I tried it. =))
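One building block of precise intersection is computing the bot's exact angular extent as seen from the wave source, rather than the classic `botWidth = 36 / distance` approximation. A heavily simplified sketch (hypothetical class; assumes a stationary bot and checks only the four corners, whereas a real implementation steps the predicted position tick by tick and clips against the wave's expanding band):

```java
import java.awt.geom.Point2D;

public class PreciseIntersection {
    // Returns {minAngle, maxAngle}: the range of absolute firing angles
    // (Robocode convention: atan2(dx, dy), 0 = north) that point at some
    // corner of the bot's 36x36 bounding box.
    static double[] hitAngleRange(Point2D.Double fireLocation,
                                  Point2D.Double botCenter) {
        double min = Double.POSITIVE_INFINITY;
        double max = Double.NEGATIVE_INFINITY;
        for (int dx = -1; dx <= 1; dx += 2) {
            for (int dy = -1; dy <= 1; dy += 2) {
                double x = botCenter.x + dx * 18;
                double y = botCenter.y + dy * 18;
                double angle = Math.atan2(x - fireLocation.x, y - fireLocation.y);
                min = Math.min(min, angle);
                max = Math.max(max, angle);
            }
        }
        return new double[] {min, max};
    }

    public static void main(String[] args) {
        double[] range = hitAngleRange(new Point2D.Double(0, 0),
                                       new Point2D.Double(0, 300));
        System.out.printf("hit range: [%.4f, %.4f] rad%n", range[0], range[1]);
    }
}
```

The point Voidious makes is that once you have this exact range for a movement option, you can feed its midpoint (or the whole range) into the danger calculation instead of a single guessed angle.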
I cannot understand your point, either because of my English skill or a simple misunderstanding. Currently Tomcat uses (roughly) this algorithm:
- predict movement in the CW and CCW directions while the closest wave's travel time is > 0
- for each point, calculate its danger as: (number of possible bullets within +/-botWidthInRadians * 0.75) * (high danger) + (number of possible bullets within +/-botWidthInRadians * 2.25) * (low danger); the distance in radians between the point and the bullet's bearing offset is also taken into account, and botWidthInRadians is calculated exactly
- move in the direction of the safest position, or if the distance to the safest point is <= the stop distance, go there directly (yes, I know this is my own badly reinvented wheel, and it's another reason for the birth of ConceptA :) )
Where can I add precise intersection? Maybe it can be applied only to true surfing? Do you know if DrussGT uses it in movement?
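The two-band danger formula in the algorithm above could be sketched roughly like this (the class name and the `HIGH`/`LOW` weights are hypothetical, and the distance-falloff term mentioned in the description is left out for brevity):

```java
import java.util.Arrays;
import java.util.List;

public class DangerSketch {
    static final double HIGH = 10; // hypothetical weight for near misses
    static final double LOW = 1;   // hypothetical weight for the wider band

    // offsets: bearing offsets (radians) of possible bullets on the wave;
    // pointOffset: bearing offset of the candidate point;
    // botWidth: the bot's exact angular width as seen from the wave source.
    static double danger(List<Double> offsets, double pointOffset, double botWidth) {
        double d = 0;
        for (double o : offsets) {
            double dist = Math.abs(o - pointOffset);
            if (dist < botWidth * 0.75) {
                d += HIGH;          // bullet would likely hit
            } else if (dist < botWidth * 2.25) {
                d += LOW;           // bullet passes close by
            }
        }
        return d;
    }

    public static void main(String[] args) {
        // Two bullets near the candidate point, one far away.
        List<Double> offsets = Arrays.asList(0.0, 0.05, 0.3);
        System.out.println(danger(offsets, 0.0, 0.08));
    }
}
```

Precise intersection would slot in as the source of `botWidth` (and of the per-point hit range), replacing any approximation of the bot's angular size.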
Hi, maybe this is interesting to you if you need more power for testing. Most of the big server companies (HP, IBM) have test/develop programs where you can get a so-called free "shell account". You can use these systems for non-commercial purposes. I'm not sure what the current restrictions are, but you should get SSH access where you can upload/download your stuff. I guess you could easily install Robocode on such a system, run your tests, and download the results. The CPU power should be way above anything you can get at home :)
I had one of these years ago ... not sure if they changed some conditions: IBM Shell Account
Thanks, I will look into it.
Oh that's interesting. I actually worked for IBM until last year (on System z, even) and didn't know about that. If my experience in our dev environments is any indication, the horsepower might not be as high as you'd think. =) My other concern is that the CPU is shared with all the other virtual machines, so you couldn't rely on a constant CPU allocation, so things would get weird with Robocode's CPU constant. Either lots of skipped turns or SlowBots given infinite time and taking forever.
Some folks have played with App Engine and EC2: Talk:RoboRumble#Client_on_Google_Apps_Engine. Seems like it can't be long before these options are cheaper/easier/better than buying your own big box for things like Robocode.
Hmm, normally those shell accounts are for people who want to test whether the hardware suits their software, and as far as I know you get fixed CPU power, and most of them run on real hardware. It has been years since I dealt with this stuff, so it might have changed. And you might be right about the horsepower :) I have no idea what today's home CPUs can do - it just came to mind while reading both of your posts. I'm happy with my 4-year-old MacBook :)
Let's talk when you develop a megabot with multiple wave surfing, 30 R-trees with 500-5000 entries, 7 kD-trees with 50000 entries, bullet shielding, and a million other computations :) Actually, I sometimes really think about spending $1000-1500 on a new computer just because of Robocode :)
I have implemented multiple wave surfing, but my local tests show only +0.17 APS.
I then searched the top-20 bots' version histories to find out others' gains, but only 2 bots have relatively accurate records:
Wintermute
0.2: APS:83.19 PL:1366 - 699 pairings
- Added HOT and LT to the weighting schemes (acts as pre-loaded HOT and LT shots)
- Changed the weighting system to give the most accurate scheme all the score, the others 0 (with a rolling average, so it evens out)
- Now surfs the second wave, and differently, I believe, from other TrueSurfing algorithms: at each tick along the second wave I check what the danger would be if I decelerated at that point, and then take the minimum danger of all those decelerations, as well as continuing, as the actual danger. Also, the second wave is weighted equally to the first. (This may be changed at a later date.)
- Takes ram damage into account for enemy energy drop

0.1: APS:82.92 PL:1350 - 698 pairings
Garm
V. 0.6i: rank: 33 PL-rank: 24 rating: 1944.58 date: 04.01.2007
- fixed movement bug (introduced in V. 0.6f)
- now surfs the first two waves
- moves some degrees away from the enemy instead of staying perpendicular

V. 0.6h: rank: 47 PL-rank: 37 rating: 1891.7 date: 24.12.2006
So the question is: "Is a +0.17 APS gain what multiple wave surfing is worth, or are there still bugs or room for improvement in my implementation?"
For GresSuffurd this was a long time ago, but it has reasonably reliable stats between 0.1.8 and 0.2.1. The difference is 18 ELO points, which is almost 1 full APS point. Note that at that time the only surfing attribute was lateral velocity, and no one had heard of precise prediction.
0.2.1 (20070117) member of The2000Club, gun: GV 0.2.2, move: WS 0.1.5, Rating: 2000 (21st), PL: 448-40 (37th)
- bugfix weighting of waves

0.2.0 (20070114) gun: GV 0.2.2, move: WS 0.1.4, Rating: 1992 (22nd), PL: 444-43 (38th)
- removed nearwall segmentation
- also evaluate second wave

0.1.9 (20070102) gun: GV 0.2.2, move: WS 0.1.3, Rating: 1980 (23rd), PL: 439-43 (40th)
- segment movement also on nearwall (3 segments)

0.1.8 (20061212) gun: GV 0.2.2, move: WS 0.1.2, Rating: 1982 (22nd), PL: 436-44 (41st)
Chalk also gained a lot of points in v2.5 (oldwiki:Chalk/VersionHistory) - I think about 50 ELO points, based on some chatter on Chalk/Archived Talk. There were multiple changes, but he attributed it mostly to changing his multiple wave surfing from a very basic approach to more like what I do in my bots.
This has come up a few times lately, so maybe I should release a one-wave version of Diamond or Dookious to the rumble and see where it ends up...
I think I only gained maybe a quarter APS point from my multi-wave surfing. I'm planning on going back and working on it more sometime in the future -- first, to just verify what difference it currently makes, and then to see if I can improve it. I'm also curious how much of a difference multi-wave surfing makes in true surfing vs go-to surfing. My drive is of the go-to variety, though it may change its mind once or twice per wave depending on changing battlefield conditions.
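One common way to fold a second wave into the danger of a go-to candidate point is to add the best second-wave danger reachable from that point, weighted down because there is still time to react before the second wave arrives. A toy sketch (the class, the option arrays, and the 0.5 weight are all hypothetical):

```java
public class MultiWaveDanger {
    // wave1Danger: danger of this destination against the closest wave.
    // wave2Options: dangers of the movement options still reachable from the
    // destination when the second wave becomes the closest one.
    // weight: how much the second wave counts relative to the first.
    static double combined(double wave1Danger, double[] wave2Options, double weight) {
        double best = Double.POSITIVE_INFINITY;
        for (double d : wave2Options) {
            best = Math.min(best, d); // we will pick the safest option later
        }
        return wave1Danger + weight * best;
    }

    public static void main(String[] args) {
        // Destination A: wave-1 danger 2.0, then second-wave options {5, 1, 3}.
        System.out.println(combined(2.0, new double[] {5, 1, 3}, 0.5));
    }
}
```

Taking the minimum over the second wave's options mirrors the Wintermute note quoted above: at each point you assume you will later pick the best available escape, rather than summing all future dangers.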
I think that bullet shadows are the reason second-wave evaluation has less impact than it used to have. The influence of bullet shadows is so big that second-wave differences are just marginal.
But I think that Diamond and DrussGT get ~1 APS from multiple wave surfing and ~1 APS from bullet shadows. I want to get 2 APS from these things too :)
Thanks to all for the responses.
Veterans, I think it would be interesting for the community to know the history of the rumble throne. As far as I know, it was something like this:
- SandboxDT - ??? - ???
- Shadow - ??? - March 2004
- RaikoMX - March 2004 - July 2004
- ??? Shadow - July 2004 - November 2004
- Ascendant - November 2004 - April 2006
- Dookious - April 2006 - ??? 2008
- DrussGT - ??? 2008 - nowadays
Please correct me if there are mistakes, and add dates if you know them. Then, I think, this list could be moved to a separate page.
Oh, thank you, I always forget about the old wiki :)
Hi to all. Which test beds do you use to test robots? I cannot find a test bed which gives stable results (+/- 0.05 APS). Currently I use every 5th bot from the roborumble with 5 seasons against each, and the results fall within an interval of +/- 0.3 APS.
Well, I suspect it's total number of battles that matters most. A recent version of Diamond dropped 0.12 APS after 2000 battles in the rumble, but perhaps that was a fluke due to someone's client skipping turns. My APS test beds are 100 bots and I run 5 seasons (500 battles) to get an idea, 10 seasons I consider accurate enough even for pretty minor tweaks, or 20 seasons if I want to feel very confident. I'm not sure what the statistical accuracy is, but +/- 0.05 for 20 seasons (2000 battles) would be about my guess. That's pretty much my experience with the accuracy of RoboRumble results after 2k battles, too.
I've been thinking about adding to RoboResearch the feature to calculate the confidence interval of the overall result, assuming normal distributions. It should be pretty easy.
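Assuming per-battle APS scores are roughly normally distributed, the 95% confidence interval of the mean is just mean +/- 1.96 * sd / sqrt(n). A sketch of what such a RoboResearch-style calculation might look like (the class name and the sample scores are made up):

```java
public class ApsConfidence {
    // Returns {low, high}: an approximate 95% confidence interval for the
    // mean of the given per-battle scores, assuming rough normality.
    static double[] confidenceInterval95(double[] scores) {
        int n = scores.length;
        double mean = 0;
        for (double s : scores) {
            mean += s;
        }
        mean /= n;
        double var = 0;
        for (double s : scores) {
            var += (s - mean) * (s - mean);
        }
        var /= (n - 1); // sample variance
        double half = 1.96 * Math.sqrt(var / n); // z-value for 95%
        return new double[] {mean - half, mean + half};
    }

    public static void main(String[] args) {
        double[] scores = {83.1, 84.0, 82.5, 83.6, 83.0, 84.2, 82.8, 83.4};
        double[] ci = confidenceInterval95(scores);
        System.out.printf("mean APS in [%.2f, %.2f] with ~95%% confidence%n",
                ci[0], ci[1]);
    }
}
```

Since the half-width shrinks with sqrt(n), quadrupling the number of battles only halves the interval, which matches the experience above that thousands of battles are needed for +/- 0.05 APS.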
I run 30 seasons of 35 rounds against 40 bots (taking approx 18 hours). No idea if it is stable though; it is my first testbed. I do know my testbed does not reflect the rumble correctly, because 0.3.2 and 0.3.5 score on par, while 0.3.7 scores approx 1.5 APS lower.
GrubbmGait, you're a hero :) For me, 4 hours per test is the limit, and I want a test which gives me results in 2 hours. And maybe you could try Distributed_Robocode - I can now share my home netbook (i3 1.3 x 2) with you, and this week I plan to set up an old Duron 1.6 as a dedicated Robocode server. So, I guess your test would take at most 6 hours (but it strongly depends on which robots are in your test bed).
One quick little thought: theoretically, it should be possible to use PCA to find the most significant axes of the roborumble and rank robots by how well they correlate with each axis. Then you also rank robots by their standard deviation. You then pick robots which simultaneously have a low standard deviation and the highest correlation with the axes that the PCA determined. Finally, you can use some linear regression to determine the weight to give each of the selected robots.
That I think, would probably be a good way to find a testbed which simultaneously represents the rumble well and has low noise... Hmm... maybe I should make a patch to Voidious' testbed maker that uses the algorithm I describe in the paragraph above...
I found a good test bed - 7 seasons against every second bot from the roborumble (~2900 battles). It produces repeatable results within +/- 0.05 APS, and the error against the real RR is within +/- 0.1 APS.
I'd think 1 battle against each bot from the rumble would probably give better results... maybe not as reproducible, but more closely correlated to the rumble.
No, Skilgannon, even 3 battles against every bot is worse in terms of stability and correlation with the RR.
I totally agree with using very large test beds (lately ~100 bots for me), but I think you're wasting some CPU cycles testing against bots that are unlikely to be affected by your changes. Bots you get 98+ against probably won't be affected unless you're testing changes to your core surfing algorithm or something. Most changes to surf stats are not going to affect HOT bots at all, and flattener tweaks won't affect any bots that have no chance of triggering your flattener.
My last release shows that you never know what will be affected :) So it's better if the test executes for 7 hours instead of 6.5, but gives confidence in the results :)