User talk:Bwbaugh
RoboRumble
Question: Number of Client Instances
If I have computers that are multicore (some dual some quad), is there a way to safely run multiple instances of the RoboRumble client without causing skipped turns? Would I need to set the affinity of each instance to separate CPUs? I'm a bit new to this, and my parallel computing class doesn't start until the Fall. =) --bwbaugh 10:05, 8 June 2011 (UTC)
Welcome to the wiki, glad you are contributing to the rumble =) If you're worried about causing skipped turns, first open up robocode and reset the CPU constant, then close robocode and open the config/robocode.properties file with notepad. Manually increase the CPU constant there to something you feel would account for things like cache thrashing, possible external logins, etc. (I'd say just double it...) and save the file. Make sure you run each instance of roborumble from a different install, otherwise there will be issues with all of them saving to or uploading from the same files concurrently. Hope this helps! --Skilgannon 10:28, 8 June 2011 (UTC)
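For example, after doubling, the relevant line in config/robocode.properties might end up looking something like this. A minimal sketch with hypothetical numbers; the exact key name and units can differ between Robocode versions, so edit the value you find there rather than typing a new line from scratch:

    # CPU constant doubled from an auto-detected value (hypothetical numbers)
    robocode.cpu.constant=10000000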
Personally, in addition to boosting the CPU constant, I prefer to run one fewer robocode instance than the number of CPUs on the system. This is because robocode seems to use some processor power in a separate thread from the one the robots run in (perhaps the Java GC? Perhaps part of the robocode engine itself? I'm not sure). About setting affinity, I doubt it would make much difference. Robocode is already alternating between running each robot, which I'd expect to limit how long-lived things are in the CPU cache, which I'd expect to mean that affinity probably wouldn't help much. That's just a guess though; it's not like I've tested. --Rednaxela 13:57, 8 June 2011 (UTC)
Agree with everything these guys said. About the affinity, I think this may depend on OS. I know on Mac, and I think on Linux, it will just work running one instance per core. I recall some people on Windows (XP days?) having to set affinity. FYI, RoboResearch is a great tool to leverage multiple cores for development. And yes, welcome and great to see you already contributing to the rumble. :-) Good luck! --Voidious 14:33, 8 June 2011 (UTC)
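For anyone who does want to pin instances on Windows, newer versions (Windows 7 and later) can set affinity from the command line when launching. A sketch, assuming two separate installs at hypothetical paths; the /affinity argument is a hex bitmask (1 = CPU 0, 2 = CPU 1):

    rem hypothetical paths; each client must run from its own Robocode install
    cd /d C:\robocode-a
    start /affinity 1 roborumble.bat
    cd /d C:\robocode-b
    start /affinity 2 roborumble.bat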
I personally used to run two RoboRumble clients (1v1 and melee) on a dual-core machine (Core 2 Duo P8400, 2.26 GHz) with CPU constants increased by about 1.1x without trouble (I've checked the results on the server). I have also run two RoboRumble clients (1v1 and melee) on my Core 2 Quad 2.4 GHz server without increased CPU constants, also without any trouble. (But my server does not do anything other than consume power when not explicitly used; it hosts my class' online judge system.) --Nat Pavasant 14:40, 8 June 2011 (UTC)
Contents
Thread title | Replies | Last modified
---|---|---
Battle Farm Active (for the next 12.5 hours) | 6 | 22:15, 15 January 2012
Melee Battle Farm Active (for the next 16 hours) | 3 | 01:28, 12 January 2012
Lots of 'Could not load robot' Errors | 2 | 16:55, 3 November 2011
Battle Farm Active (for the next 12.5 hours)
For those that might want to be active with submissions to the roborumble, I'm currently running 1v1 battles for the next 12.5 hours (until 6:00 AM CST 1/11/2012). The current rate appears to be about 840 battles per minute, so you'll likely be able to make a couple of submissions. (Note it may take a little time for newly added bots to be downloaded.)
Could you contribute a bit of that huge CPU power to meleerumble? It is the slowest league to stabilize, taking about 2 days with me alone contributing.
Melee active till 6:00 AM Mon 1/16/2012. ITERATE = ON, so expect a max 2 hr delay. Also, one of the bots is unavailable for download (via the participant's URL), so it is not participating.
Over 60 battles per pairing in melee. O.o And I uploaded the bot yesterday. :)
...and 1v1 is like 7-8 battles per pairing. Maybe you could split the farm half 1v1/half melee?
Melee Battle Farm Active (for the next 16 hours)
Currently running melee battles for the next 16 hours (until 6:00 AM CST 1/12/2012), unless another project requires diversion. The current rate appears to be about 36 battles per minute (2,195 uploads per minute). This also appears to be close to the limit that the roborumble server can handle (the upload speeds are just starting to slow at this point), even though only about 30% of capacity is being used.
How often is the participant list updated for melee battles when ITERATE = TRUE?
I added a bot to the melee participant list, waited, but it didn't get picked up. So I restarted a single client, and thus far I still haven't seen the other clients pick it up.
Do I have to turn ITERATE off and instead use a BAT-file loop? Any thoughts would be appreciated!
Just needed to wait a little bit longer ... the other clients are picking up the new participant now!
The participants list is updated every 2 hours with ITERATE=TRUE, as I remember. But I am using a bat-file loop in all leagues anyway.
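If you go the bat-file route, the usual trick is a wrapper that just restarts the client forever, so the participants list is re-read on every pass. A minimal sketch; loop.bat is a hypothetical name, and it assumes it sits next to roborumble.bat in the install, with ITERATE turned off in the config:

    rem loop.bat - restart the roborumble client endlessly
    :loop
    call roborumble.bat
    goto loop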
Lots of 'Could not load robot' Errors
Is anyone else getting a lot of these errors?
    Could not load robot: robot.name.here
    Skipping battle because can't load robots: robot.name.here, robot.name.here
I looked at the RoboRumble battle history for the bots, and even a few days ago I had battles uploaded for these bots, so I don't really know what the issue is. It could be that the bots somehow got corrupted while I was trying to keep the ZIP distribution up to date, but I'm not sure about that.
Has anyone else had this or a similar issue before?
The bots I'm getting errors on include: pkdeken.Paladin (java.lang.ClassNotFoundException) ... bayen.UbaRamLT, aetos.AetosFirstBot, lancel.Lynx, bayen.nut.Squirrel, yk.JahRoslav ... the list goes on and on, covering roughly 70% of the attempted battles.
--bwbaugh 05:18, 3 November 2011 (UTC)
Have you moved your roborumble directory or your robots directory? In that case, you have to delete the '.data' dir and the robot.database file from the robots directory.
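On Windows, that would be something like the following, run from the Robocode install directory. Paths assume the default layout, so double-check before deleting; the client rebuilds both on the next run:

    rem remove the cached robot database and per-robot data
    del robots\robot.database
    rmdir /s /q robots\.data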
As I was falling asleep I figured it was that (I ran it once outside of the normal directory before repackaging).
Thanks! Should be fixed for the next execution cycle.