User talk:Beaming


Contents

Thread title (replies, last modified)

Smart bots competition (6 replies, 13:20, 16 January 2023)
Robocode on HiDPI displays (2 replies, 12:32, 16 January 2023)
Bots starting position randomizing is not working (10 replies, 14:50, 31 July 2018)
How to keep roborumble running forever (2 replies, 02:04, 4 April 2018)
Melee client corrupted (3 replies, 17:24, 31 October 2017)
cpuConstant (9 replies, 21:38, 17 October 2017)
Awful (0 replies, 06:59, 8 September 2017)
Fire power 2.95 bug (3 replies, 03:17, 4 December 2015)
Head on gun in melee (12 replies, 04:48, 12 November 2015)
Article (6 replies, 16:42, 18 September 2014)
Strange regression in robocode 1.9.2 vs 1.8.2 (1 reply, 04:25, 2 June 2014)
Gertjan1996 needs help (0 replies, 21:54, 21 May 2014)
Bad bots in roborumble (0 replies, 04:15, 25 February 2014)
Rumble Client (8 replies, 05:03, 10 February 2014)
What is wrong with meleerumble? (9 replies, 16:40, 28 November 2013)
What to do about printing too much? (14 replies, 10:17, 21 November 2013)
EvBot (7 replies, 05:51, 16 November 2013)

Smart bots competition

As we know, there is a hard (and badly controlled) maximum processing time per turn per bot imposed by the Robocode engine. This pushes all of us to do our best within the allocated time limit. But as a result, some potentially better but slower strategies never show up at the top of the rating.

I am looking for a way to set/control the allowed calculation time per turn in the rumble clients, in the hope of finding the best among the slow and wise bots. Then we could start a new rumble codenamed "wise slowpokes". Additionally, it may spark a new wave of interest among the authors of the kings of the rating.

What do you all think about this idea?

Beaming (talk)17:13, 11 October 2014

I like the idea of more processing time in order to attempt more complex stats, but to be honest I've tried some pretty complex stuff (including Spectral Clustering) for KNN on recorded data, and nothing has beaten simple KNN with a square kernel, weighted on sample distance. Not to mention that a lot of these algorithms aren't just a constant 50x slower or whatever; they essentially become intractable at sizes above ~500 data points, scaling quadratically or worse.
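
For reference, a minimal sketch of the kind of KNN estimator being described, in generic Java (the class, fields, and weighting are made up for illustration; real guns differ in features, data structures, and tuning):

import java.util.*;

// Minimal KNN guess-factor estimator sketch: find the k nearest recorded situations,
// then score candidate guess factors with a flat ("square") kernel whose votes are
// weighted by inverse distance in feature space.
class KnnGunSketch {
    static class Sample {
        double[] features;      // e.g. normalized distance, lateral velocity, ...
        double guessFactor;     // observed hit angle, normalized to [-1, 1]
        Sample(double[] f, double gf) { features = f; guessFactor = gf; }
    }

    private final List<Sample> history = new ArrayList<>();

    void record(double[] features, double guessFactor) {
        history.add(new Sample(features, guessFactor));
    }

    // Returns the highest-scoring guess factor for the current situation, or 0 with no data.
    double aim(double[] current, int k, double kernelHalfWidth) {
        if (history.isEmpty()) return 0;

        // k nearest neighbours by squared Euclidean distance (a kd-tree would be much faster).
        List<Sample> nearest = new ArrayList<>(history);
        nearest.sort(Comparator.comparingDouble(s -> squaredDistance(s.features, current)));
        nearest = nearest.subList(0, Math.min(k, nearest.size()));

        // Each neighbour's guess factor is a candidate; its score is the distance-weighted
        // number of neighbours falling inside the square kernel around it.
        double bestGf = 0, bestScore = -1;
        for (Sample candidate : nearest) {
            double score = 0;
            for (Sample s : nearest) {
                if (Math.abs(s.guessFactor - candidate.guessFactor) < kernelHalfWidth) {
                    score += 1.0 / (1.0 + squaredDistance(s.features, current));
                }
            }
            if (score > bestScore) { bestScore = score; bestGf = candidate.guessFactor; }
        }
        return bestGf;
    }

    private static double squaredDistance(double[] a, double[] b) {
        double d = 0;
        for (int i = 0; i < a.length; i++) { double diff = a[i] - b[i]; d += diff * diff; }
        return d;
    }
}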

A problem with unlimited processing time is that it becomes possible to simply keep a copy of other bots and simulate them to determine where they will shoot/move.

Does anybody have any specific techniques that they would use if there was more processing time available?

Skilgannon (talk)18:03, 13 October 2014

I personally was aiming at a better movement algorithm, i.e. one could do better than 1- or 2-wave surfing. That would be super critical in melee, where you can track/react/predict more than just the closest bot.

The idea of an enemy simulator is also attractive. I personally do not like clustering algorithms; I would rather emulate and distill a subset of physically possible and optimal tracks for the current situation.

A clustering algorithm would probably cease to work once you apply it in melee, since you have to account for a subset of neighboring bots as input. I would think this drastically increases the problem's dimensionality.

Voidious's displacement vectors would be the closest to real emulation, but they are still just an approximation.

Beaming (talk)18:23, 13 October 2014

I also have a crazy idea about fine control of the motion, i.e. something more complicated than a simple go-to point which does not change for several ticks. I want to have a path with smaller/different rotation angles, and potentially a speed which does not accelerate all the way to the max or min state. This produces a lot of possible paths and takes a lot of time to properly evaluate the danger. Of course it is overkill, since such paths are often only 1 pixel apart.

I think DrussGT has a trick where every several ticks the bot turns left and right, so it moves along a wave-like path. This confuses linear targeting quite a lot.

Though my goal would be to fit in between incoming bullets with fine motion control, and also to meet a wave while the bot is still moving.

Beaming (talk)00:23, 14 October 2014
 

I had some movement algorithm things in the works, and they would have been more straightforward to implement with more processing time, sure, but half the fun, so to speak, was coming up with the right optimization strategies to make it very fast.

Honestly, overall I don't feel like a higher per-tick processing time would be too helpful at this point. My reasoning is that high-level Robocode tends to require a very large amount of automated testing to know whether some tuning was actually an improvement or not. If the processing time limit were increased, it would also increase how long that testing takes.

Basically, I think the potential gains from slower algorithms are outweighed by the gains of faster iterations during tuning/development.

If anything I'd be more interested in a "FastBots" league, except I think the engine's control over the processing limit is too unreliable/unstable for that to be reasonable.

Rednaxela (talk)18:36, 13 October 2014
 

Well, I wouldn't mind implementing full and proper N-wave surfing; right now Nene just does 2-wave surfing in a simple way. Full N-wave surfing would be very time-consuming to calculate, but I am unsure it would bring any major improvement to movement.

Chase18:58, 13 October 2014
 

A slow but functional class of algorithms would be Monte Carlo methods. The more time you give them, the stronger they become.

MN (talk)15:24, 18 October 2014
 

Robocode on HiDPI displays

Hi, I am trying to use my HiDPI laptop for Robocode, but everything is super small. Is there a trick to scale everything up within Java? I am running it all on Linux.

I tried the '-Dsun.java2d.uiScale=2' trick, but it seems to have no effect on OpenJDK.

Beaming (talk)15:01, 22 March 2019

You are not the first to experience this problem.

Robocode 1.9.3.0 and its predecessors produced visual glitches with DPI scaling enabled (Bug-394). Fnl's workaround for this in 1.9.3.1 was to have Robocode disable DPI scaling entirely. Unfortunately, this causes the UI to appear very small.

The ideal solution, of course, would be to have Robocode support HiDPI properly. But that would mean changing a significant amount of UI code to take DPI into account, as well as modifying the graphics system so that the battle view can be scaled up arbitrarily high without losing quality. That would take a lot of work.

MultiplyByZer0 (talk)21:19, 22 March 2019

Only a few lines of code need to be changed. The glitch is caused by a bug; otherwise, Java already has perfect HiDPI support. See PR #62.

Xor (talk)12:32, 16 January 2023
 
 

Bots starting position randomizing is not working

Guys,

Can you double-check this for me? I just noticed that 1v1 rounds seem to have the same bot starting locations, i.e. in a 35-round battle, every round start (at least the bot positions) is identical.

I see it in robocode-1.9.2.5

Beaming (talk)21:33, 17 October 2017

I don't know about RoboRumble, but in Robocode I see that the randomizing works.

Cb (talk)22:01, 17 October 2017
 

Random here.

Rsalesc (talk)22:12, 17 October 2017
 


I definitely have one GUI instance where every round start seems to be the same. The other seems to be fine.

Looks like Robocode's random number generator got stuck.

Beaming (talk)23:59, 17 October 2017
 

All random here as well. Stupid question (but just to be safe): no fixed position set in the properties?

GrubbmGait (talk)00:16, 18 October 2017

Bingo! It apparently was set somehow for one of the bots.

But I cannot even find how to set it up in the GUI; I only spotted it in the config after your hint.

Looks like I was right: Robocode reveals some secrets only to people with more than 10 years of experience :)

Beaming (talk)00:27, 18 October 2017

I've looked into the newest Robocode source code, but couldn't find where the robot start position settings live, except in the tests used by Robocode itself. Do you still remember which config option fixes the start positions?


Edit: got it, it just hides behind some direct field access instead of accessors.

The config is "robocode.battle.initialPositions".
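
For reference, in a .battle file that would look something like the line below. The key is the one named above; the (x,y,heading) tuple format is from memory, so treat it as an assumption and check it against your Robocode version.

# hypothetical my.battle excerpt: fix the start positions of the first two robots
robocode.battle.initialPositions=(50,50,0),(400,300,90)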

Thanks for mentioning that feature anyway!

Xor (talk)14:35, 27 July 2018

It is good that you found it. I had already forgotten what it was. The strange thing is that I do not remember setting it up, so some bug must have triggered it back then.

Beaming (talk)15:51, 28 July 2018

However, it doesn't work when set in robocode.properties.

From the source code I know that this config is only honored in battle files.

And due to a Robocode bug, once you open intro.battle, the first bot's initial position stays fixed as long as another battle file is not opened; neither "New Battle" nor "Restart" helps.

So, did yours appear in robocode.properties?

Xor (talk)14:29, 31 July 2018
 
 
 
 
 

How to keep roborumble running forever

Thanks for running roborumble for so many years! I tried to do the same thing with my VPSs, but the roborumble client kept getting killed within a few days.

What are your settings for keeping roborumble running for such a long time? Did you experience similar problems running roborumble?

Xor (talk)03:12, 3 April 2018

The funny thing is that I don't do anything special. I just run the clients on my own computers. They all currently run the Debian stable (stretch) Linux distribution with OpenJDK 8. Robocode is installed manually in my home folder.

I run the clients in 'screen' sessions, so the session survives if I get disconnected. From time to time I restart them manually, mostly for the selfish reason of a quicker bot-list update. But the current run is almost 100 days straight without interruption. My machines have 4 to 6 cores, so I also put a normal office load (browsing, YouTube, short-term calculations) on them.

One more thing: I just checked, and a client consumes about 4 GB of memory (including virtual memory) after those 100 days.

I would suspect that the VPS shares your CPU with other virtual OSes and forcefully kills CPU-consuming processes.

In the many years that I have run Robocode clients, I have never had a problem with sporadic crashes. A restart happens only if my home loses power and the UPS runs dry.

Beaming (talk)16:08, 3 April 2018

Thanks for your vital information! It seems that it is peak memory that kills my JVM in the long run; being killed by the VPS may be another reason. I will try running the client on computers in my lab then ;)

Xor (talk)02:04, 4 April 2018
 
 

Melee client corrupted

Hello

I think one of your meleerumble clients is corrupted, since it is continuously re-running battles against old versions of Neuromancer and Firestarter. Normally a client should get new bots once every 2 hours, but it has been a few days since the Firestarter update and yours is still running the old versions. Could you investigate?

Thanks

Skilgannon (talk)10:49, 31 October 2017

Weird. The client was running battles with the new versions of the bots, but also attempted to upload ratings for the old ones. I cleared the cache (files) and temporary (temp) dirs. Strangely enough, I see the following at start-up:

Iteration number 0
Could not load properties file: ./roborumble/files/codesizemelee.txt
Downloading rating files ...
Downloading participants list ...
Downloading missing bots ...
Downloaded cb.fire.Firestarter 2.0d into ./robots/cb.fire.Firestarter_2.0d.jar
Removing old participants from server ...
Removing entry ... cb.fire.Firestarter_2.0e from meleerumble
OK. cb.fire.Firestarter 2.0e retired from meleerumble
Removing entry ... eem.EvBotNG_v11.0 from meleerumble
OK. eem.EvBotNG v11.0 retired from meleerumble

Note that eem.EvBotNG v11.0 is 3 versions behind and cb.fire.Firestarter 2.0e is one version behind.

I cannot figure out where it reads these versions from, since I cleared the caches. My only assumption is that they are coming from the LiteRumble. Am I right?

Nevertheless, despite the reported removal, at the end of the battle I see:

Fighting battle 0 ... fire219.CatBot 1.0,stelo.Spread 0.3,gh.mini.Gruwel 0.9,ara.Shera 0.88,wee.Gem 1.8.4,davidalves.net.DuelistNanoMelee 1.01,pedersen.Moron 2.0,zyx.mega.YersiniaPestis 3.1,emp.Yngwie 1.11,zyx.nano.BacillusComma 1.0
RESULT = stelo.Spread 0.3 wins, gh.mini.Gruwel 0.9 is second.
Fighting battle 1 ... pedersen.Ugluk 1.1.1,florent.XSeries.X2 0.12,gimp.GimpBot 0.1,yk.JahRoslav 1.1,agrach.Dalek 1.0,lrem.Spectre 0.4.4,jcs.Seth 1.8,maribo.FollowFire 1,DTF.Kludgy 1.2b,franzor.Lizt 1.3.1
RESULT = pedersen.Ugluk 1.1.1 wins, DTF.Kludgy 1.2b is second.
Fighting battle 2 ... stelo.WangRobot 1.0,dsw.StaticD 1.0,kurios.DOSexe .9b,jk.melee.Neuromancer 6.9,amk.Punbot.Punbot 0.01,oog.melee.CapuletDroid 1.1,lessonz.robocode.Oz 0.5.0,cs.sheldor.Talon 1.1,eem.IWillFireNoBullet v2.8,aaa.ScaledBot 0.01d
RESULT = jk.melee.Neuromancer 6.9 wins, aaa.ScaledBot 0.01d is second.
...
...
Ignoring: cb.fire.Firestarter 2.0e,cb.fire.Firestarter 2.0d,SERVER
Ignoring: amk.Punbot.Punbot 0.01,cb.fire.Firestarter 2.0e,SERVER

Note that Firestarter was not even fighting the battles. Where is it coming from?

It all might be related to the occasionally observed appearance of two versions of the same bot in the rumble ratings, which lingers for several hours.

Beaming (talk)15:15, 31 October 2017

Interesting, maybe it isn't your client uploading these bots then. Maybe it is User:Rsalesc? I should really add some more logging to the rumble so this kind of thing is more obvious.

Skilgannon (talk)17:12, 31 October 2017

Oh, ok. Just noticed I left the rumble client open in my university labs (rsalesc_dahia), and its connection is a bit sketchy. That explains a lot. I'll be there soon to shut it down.

Rsalesc (talk)17:24, 31 October 2017
 
 
 

cpuConstant

Hi all, I decided to work on my skipped turns and to watch my execution time. As the first step, I recalculated the cpuConstant according to the CpuManager code (I wish it were available from within the robot) at the beginning of each round. Guess what: the number fluctuates wildly. By wildly, we are talking more than FOUR times! On the short end of the spectrum the cpuConstant is about 8 ms, and it can be as large as 35 ms.

Any ideas what is going on? Everything is done with OpenJDK 8. One difference from the original code: I measure time with System.nanoTime().
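
For context, the kind of measurement being described is roughly the following. This is not the actual CpuManager code, just a generic busy loop timed with System.nanoTime(); the workload and iteration count are arbitrary, and the point is only that the measured time swings with turbo boost, GC, and scheduler noise.

// Rough benchmark sketch: time a fixed amount of work and compare runs.
public class CpuBenchSketch {
    public static void main(String[] args) {
        long start = System.nanoTime();
        double sink = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sink += Math.sin(i) * Math.sqrt(i);   // arbitrary floating-point work
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("benchmark took " + elapsedMs + " ms (sink=" + sink + ")");
    }
}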

Beaming (talk)03:37, 17 October 2017

Yes, with Turbo Boost, CPU speed is not constant ;) And I think every new CPU has something like that.

Xor (talk)03:40, 17 October 2017

I would expect the cpuConstant to go lower as turbo kicks in. But my first run usually gives a lower number (faster speed), and subsequent executions are typically, but not always, longer/slower.

In either case it is a problem. Recall the recently discussed ThreadDeath issue.

Beaming (talk)03:46, 17 October 2017

Do you rely heavily on GC? I mean, do you create objects in loops? If the speed gets slower, I can only guess that it would be GC.

Xor (talk)04:05, 17 October 2017

Well, I do have many HashMaps and several kd-trees, but the fact that Java kicks in with GC at random times is certainly an issue. Round initialization should be a relatively low-CPU event on my bot's part.

The strange part is that Robocode does not report a 37 ms long execution as a skipped turn, while my "official" cpuConstant is about 6 ms.

Beaming (talk)04:34, 17 October 2017

Kd-trees and HashMaps are fast and GC-friendly IMO. Did you try recalculating the CPU constant?

Xor (talk)04:47, 17 October 2017
 

You mean the robocode properties file says CPU constant = 6ms?

Xor (talk)04:47, 17 October 2017
 

Robocode gives robots extra time at the beginning of a round. In addition, most of this time is probably taken away by one-time setup and initialization code, making your calculated CPU constant longer than it should be.

MultiplyByZer0 (talk)21:03, 17 October 2017
 
 
 
 
 

Awful

Wow, you made a bot that tries to be the worst. I have one too. Nobody can kill that bot =)

Dsekercioglu (talk)06:59, 8 September 2017

Fire power 2.95 bug

I was browsing the wiki and found a couple of places where authors mention the fire power 2.95 bug. Can someone tell me what it is? I did my best, but I could not find a description of, or rationale behind, this magic number.

Beaming (talk)03:13, 3 December 2015

It is actually an x.x5 power bug. If you look at the history of BasicSurfer/Code, you will see that the old method of comparing bullet powers, used to match onHitByBullet events to the correct wave, multiplied both powers by 10, rounded them, then checked for equality. This meant that, due to rounding errors, some bullets with powers of the pattern x.x5 would not be matched and the bot would not learn from the bullet hit. Since many bots used BasicSurfer as a base, this actually caused a noticeable increase in rumble score.
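
To illustrate the failure mode, here is a small sketch (not the exact BasicSurfer code; variable names are made up):

public class PowerMatchSketch {
    public static void main(String[] args) {
        double storedPower = 1.95;                              // power recorded when the wave was created
        double bulletVelocity = 20 - 3 * storedPower;           // Robocode: velocity = 20 - 3 * power
        double reconstructedPower = (20 - bulletVelocity) / 3;  // power recovered in onHitByBullet

        // Old approach: scale by 10, round, compare. For x.x5 powers the two doubles can land on
        // opposite sides of the .5 rounding boundary, the equality check fails, the hit is never
        // matched to its wave, and the surfer never learns from it.
        boolean oldMatch = Math.round(storedPower * 10) == Math.round(reconstructedPower * 10);

        // Safer: compare with a tolerance, like the built-in robocode.util.Utils.isNear does:
        boolean newMatch = Math.abs(storedPower - reconstructedPower) < 0.001;

        System.out.println("old comparison matches: " + oldMatch + ", tolerant comparison matches: " + newMatch);
    }
}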

Skilgannon (talk)09:07, 3 December 2015

Yeah... IIRC I was doing some extensive default bullet power testing and found that 1.95 drastically outperformed both 1.94 and 1.96 in a large test bed and this turned out to be why. I felt kind of dirty since I was both the person that introduced the bug and the one who figured out how to exploit it. :-/

Voidious (talk)23:41, 3 December 2015

Thanks guys, this clears it up.

In retrospect, it is strange to see the old rounding-based code. Is it a remnant of some code-shrinking attempts?

Looks like nobody wants to use the built-in robocode.util.Utils.isNear :)

Beaming (talk)03:17, 4 December 2015
 
 
 

Head on gun in melee

Folks, I have an issue which is driving me crazy. How come HawkOnFire, with only a head-on gun, has such a high bullet bonus in melee? My EvBotNG is in the top 20 in melee, yet when I run a simulation with HawkOnFire included, the bullet bonus it gets is just a touch higher than Hawk's. Yes, EvBotNG survives way more often, so the survival bonus compensates.

When I try to use only a head-on gun myself, my performance becomes even worse than Hawk's. I looked at the code, and Hawk has only the head-on gun. So HOW does it pull off this miracle?

Beaming (talk)04:53, 6 November 2015

I'll have to think about it more, but off the top of my head, are you using the same bullet power? HoF uses power=3.0 a lot, which is more likely to give high bullet damage bonus. Also, if your bot's more survival oriented, it might keep more distance from opponents, which would cost you bullet damage in general.

And big congrats getting to #14 in Melee! Some very tough competition that high in the rankings.

Voidious (talk)06:20, 6 November 2015

I think you are right: the average distance must be the parameter in question. How could I forget about it?

Also, thanks for the warm words about my humble progress. One thing on my mind is an anti-Diamond gun :) It is amazing how your bot survives with relatively short motions around a given corner in melee. I am thinking about a gun which, instead of the most common GF, would aim at the most visited location. I think it would score quite well against Diamond. On the other hand, it looks like your bot is quite unique in this regard.

Do you recall whether this was tried in the glory days of Robocode?

Beaming (talk)03:14, 7 November 2015

I don't remember anything like that. There were schemes like that for movement, I think, with different areas being considered safe/unsafe.

One thing I found helped a lot in Neuromancer was the bullet power formula, and taking distance of the target I was aiming at into account. Close targets get lots of bullet power, further targets get less bullet power. If lots of targets are bunched together, they also get lots of bullet power.
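
Something in that spirit, as a rough sketch (the thresholds and formula are invented for illustration, not Neuromancer's actual numbers):

// Rough melee bullet-power chooser: more power for close or bunched-up targets,
// less for far ones, and never more than we can afford. Numbers are illustrative only.
static double chooseBulletPower(double distance, int targetsNearAim, double myEnergy) {
    double power;
    if (distance < 150) {
        power = 3.0;                                     // point blank: full power
    } else if (distance < 500) {
        power = 3.0 - (distance - 150) / 175;            // taper off with distance
    } else {
        power = 1.0;
    }
    power += 0.5 * Math.max(0, targetsNearAim - 1);      // bonus when several bots are bunched together
    power = Math.min(power, myEnergy / 4);               // don't disable ourselves
    return Math.max(0.1, Math.min(3.0, power));          // clamp to Robocode's legal range
}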

Good luck. I've been tempted a few times over the last few weeks to pull Neuromancer out and implement some of the features that are missing. If I do that I'd also like to open-source the current version of Neuromancer, since it seems nobody else has seriously tried a surf-everybody strategy.

It's very interesting watching the different movement strategies of different bots - Diamond sticking in the corners and picking off anybody who approaches, Shadow finding empty spots and aggressively clearing out close bots, Neuromancer skipping through the middle of the chaos and somehow dodging bullets.

Skilgannon (talk)08:09, 11 November 2015

The bullet energy formula is probably the most important ingredient, but every time I touch it even a little, my score drops like a rock.

I am actually surprised by how few bots aim well at Diamond in melee. It dances in a corner along a relatively simple triangular path with short motions; I would think a pattern-matching gun should follow it quite well. And yet Diamond tends to out-survive quite a lot of bots. But my old pattern-matching gun is so sloooow that I did not even put it in EvBotNG. On the other hand, Diamond seems to be quite unique with this corner-hugging strategy, so having a special gun just for it would not bring a big score boost.

Also, I think my old EvBot was dodging everyone's bullets; it did not surf in the canonical form of predicting the bullet hit position, but it did take every wave into account. With the new bot, I am hitting skipped turns a lot, since I now do exact path calculation and precise intersection. I have a feeling that I am doing it wrong and it should not take so much CPU. But iterative procedures suck; I should pull out my linear algebra skills and do a non-iterative hit position calculation.

As for Neuromancer, this one tends to survive in impossible situations. I used to think that you knew some hacks and could actually see the bullets :)

Seeing Neuromancer open-sourced would be great. I do not see what stops you; I would not expect literal copies from true developers. But it would be lovely to see the tricks you use.

But even more exciting would be to see the new wave of discussions which comes with new developments. The wiki seems to be very quiet lately, with not much going on.

Beaming (talk)16:56, 11 November 2015

Strictly speaking, Diamond doesn't have any deliberate corner movement strategy, and he certainly doesn't have a hard-coded triangular pattern that he follows. It's just a well-tuned minimum risk movement with some randomness based on recent locations. So hopefully it looks more predictable than it is. :-) One thing he does is precisely predict a few ticks ahead to avoid hitting walls, so his wall smoothing is pretty graceful and that may help him stay in the corners (where it's safer) and still avoid recent locations. Looking at it now, his melee movement is actually a very small amount of code: MeleeMover.java.
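
For anyone curious, "minimum risk movement with some randomness based on recent locations" boils down to something like the sketch below. This is not Diamond's actual code; the risk terms and constants are invented, but the shape (sample candidate points, score them, go to the least risky one) is the standard recipe.

import java.awt.geom.Point2D;
import java.util.List;

// Generic melee minimum-risk movement sketch.
class MinRiskSketch {
    static Point2D.Double pickDestination(Point2D.Double me, List<Point2D.Double> enemies,
                                          List<Point2D.Double> recentSpots,
                                          double fieldW, double fieldH) {
        Point2D.Double best = me;
        double bestRisk = Double.MAX_VALUE;
        for (int i = 0; i < 36; i++) {                        // candidate points on a ring around us
            double angle = 2 * Math.PI * i / 36;
            double r = 100 + 100 * Math.random();             // a little randomness in the radius
            double x = me.x + r * Math.sin(angle);
            double y = me.y + r * Math.cos(angle);
            if (x < 30 || y < 30 || x > fieldW - 30 || y > fieldH - 30) continue;  // stay off the walls
            Point2D.Double candidate = new Point2D.Double(x, y);

            double risk = 0;
            for (Point2D.Double e : enemies)                  // closer enemies are more dangerous
                risk += 1.0 / candidate.distanceSq(e);
            for (Point2D.Double s : recentSpots)              // avoid places we visited recently
                risk += 0.2 / (1.0 + candidate.distanceSq(s) / 10000.0);
            if (risk < bestRisk) { bestRisk = risk; best = candidate; }
        }
        return best;
    }
}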

As for what's been tried before - my attitude is generally that everything has been tried before ;), but that doesn't mean it's not worth pursuing. Those attempts could have been suboptimal or outright broken and the landscape of competition has changed over time. If I were climbing the Melee ranks, I would certainly consider specialty guns for the top bots. And I have certainly considered it for Shadow and DrussGT in 1v1. :-)

But no, I don't remember anyone trying that. oldwiki:AreaTargeting came to mind, but looks like something different. Maybe you could formulate something like a GF that's based on a corner / quadrant of the field, the way a GF assumes orbital movement and factors out orbit direction. Like polar coordinates from nearest corner, with 0/north being towards the center of the field.

Voidious (talk)17:18, 11 November 2015
 

The reason I didn't want Neuromancer to be open source was because I wanted there to be more than one implementation style of surfing for melee. Shadow and SilverSurfer had two very different styles of surfing, but when rozu open-sourced Apollon, everybody took that style (true surfing) and nobody touched goto-style until I came along with DrussGT. Perhaps, though, open-sourcing Apollon kickstarted the wave surfing movement for 1v1, which might never have happened otherwise. I'll release an open version of the current Neuromancer in a few weeks; that will give me some motivation to get these features added =)

It is quite possible to get a fast Play It Forward - the gun in Neuromancer depends on it.

And I think you may be disappointed with how simple Neuromancer is =) I suspect those who read through it will go "Oh, obviously." and proceed to write their own equivalent from scratch, much like others have done with the BasicSurfer.

Skilgannon (talk)21:01, 11 November 2015
 
 
 
 
 

Article

Just noticed that my article hit its goal :) Nice to see it :)

Jdev (talk)09:13, 17 September 2014

Yes, your article has changed me. I am not sure whether I should thank you or not :), since robocoding has eaten a large chunk of my time.

Beaming (talk)14:53, 18 September 2014

I understand what you're talking about :) But practice shows that after a few years the robovirus gets suppressed most of the time and you return to normal life, except for rare relapses :)

Jdev (talk)15:20, 18 September 2014
 

I'm sorry, I seem to be missing some context. To what article are you referring?

Sheldor (talk)16:13, 18 September 2014

See User:Beaming's background. I can post a link to this article, but it's in Russian. It's the story of Tomcat's climb to first place in the PL.

Jdev (talk)16:25, 18 September 2014

Please do. I don't know Russian, but using Google Translate should work well enough.

Sheldor (talk)16:37, 18 September 2014

Here it is

Jdev (talk)16:42, 18 September 2014
 
 
 
 

Strange regression in robocode 1.9.2 vs 1.8.2

Hi everyone.

I see a strange regression in the performance of my bot when it is compiled with a different version of Robocode.

Specifically, if I take EvBot v4.6.4 and compile it with Robocode v1.8.2, I get about a 20% higher score against, let's say, sqTank.waveSurfing.LionWWSVMvoid 0.01. However, when I compile with Robocode v1.9.2, I see drastically worse performance. In both cases I run the battles with Robocode v1.9.2.

Does anyone see similar behavior?

When I compare the stats of my more recent bot vs the older v4.6.4, for example [4.6.12 vs 4.6.4], I see a drop in score against some bots which my bot used to be good against. There were quite a lot of changes, and I originally thought it was due to experimental code, but now I am quite convinced that the major APS drop is due to the change of Robocode version, since from v4.6.5 and up I used Robocode 1.9.2.

Any ideas?

Beaming (talk)05:11, 1 June 2014

Sorry for the noise. The above seems not to be true. It looks like I accidentally used the wrong version of my code during the comparison, which affected the performance. Robocode seems to be fine.

Beaming (talk)04:25, 2 June 2014
 

Gertjan1996 needs help

Gertjan1996 needs help using multiple Java files in a robot for his school.

Tmservo (talk)21:53, 21 May 2014

Bad bots in roborumble

My rumble client is contaminated with a few bad bots: cb.mega.Remedy 0.2 and uji.SiberianKhatru 1.0.

They produce the following error message: Can't load 'uji.SiberianKhatru 1.0' because it is an invalid robot or team. Maybe we should comment them out from the participants wiki page? Alternatively, maybe something is wrong on my side?

Beaming (talk)04:15, 25 February 2014

Rumble Client

Hey, I just want to start by thanking you for running a rumble client. However, for some reason the results from your client for DrussGT are very different from what I'm getting with my local tests. I'm just curious what your setup is and what might be causing this.

Thanks

Skilgannon (talk)13:37, 8 February 2014

Hi Skilgannon,

There is nothing special. The rumble clients run with a stock config, where only the user name is set, and I exclude Lo_Ian.Gandalf_V4*, since it seems to halt on my side. This is a 6-CPU machine, but only 2 cores are heavily used for the roborumble and meleerumble clients. Though last week I left it unattended and Firefox started to consume a lot of load, so maybe that somehow skewed the CPU constant of the client.

If you have some ideas about what it could be, I will be happy to investigate.

Beaming (talk)18:34, 8 February 2014

Things often brought up in rumble client discrepancy cases are which OS is in use, and what exact version of Java is in use.

Also, other possible relevant factors are:

  1. If CPU frequency scaling is enabled, as it is by default on most newer machines (this can make Robocode's CPU time limiting unfair)
  2. If the system in question has high memory usage such that it goes into swap space.
Rednaxela (talk)21:09, 8 February 2014

I am not sure about CPU scaling; it is whatever Debian does with a relatively modern AMD CPU. As for swap, I am 99.99% sure it is not used. This machine has 16 GB, of which only about 6 GB is used. If I look at memory usage, I see only 3 MB of swap used, which is way too small for Java. Maybe the problem is the Linux scheduler, which constantly moves applications from one CPU to another.

There is one strange thing though: I notice that my meleerumble client usually crashes within a couple of days, but the roborumble client does not. And this is true for another computer of mine as well.

Beaming (talk)21:27, 8 February 2014

FWIW I don't think anybody's measured the effect of dynamic clock speed on skipped turns, only speculated that it should have an effect. For all I know, the timings coming from Java on modern CPUs could have so much variance that the dynamic clock speed doesn't increase the number of skipped turns that much.

Voidious (talk)22:11, 8 February 2014

Here I run a single client with a blank config for a while so it calculates the CPU speed alone, before the dynamic clock kicks in. Then I copy the config to all clients and run them in parallel.

The effect is that they assume a worst case for CPU speed, and skipped turns occur less often than they should, which I think is better than occurring more often than they should.

MN (talk)05:03, 10 February 2014
 
 
 

Thanks for the offer. Could you test DrussGT 3.2.1 vs. Garm, Shadow, Hydra and Gilgalad? Functionally, 3.2.1 should be almost equivalent to 2.8.16 (3.2 has 3-series bullet shielding disabled), but as this comparison shows they are getting very different scores.

There are also a few bots DrussGT now gets 100% against - this worries me, as it points to them crashing.

Skilgannon (talk)07:28, 9 February 2014

What should the settings on a client be to perform such a test? Or should I use RoboRunner?

Also, my observations show that a short test set sometimes gives drastically (1% APS) higher scores. I was fooled by it a couple of times into thinking that my older bot was superior to the new version. They end up having the same score, or even the reverse, as I keep waiting for the stats to accumulate.

Beaming (talk)21:36, 9 February 2014

I compared DrussGT version 2.8.16 to DrussGT 3.2.1, and 2.8.16 was way better: http://www.2shared.com/photo/bKABJFb4/2816_vs_321.html

Tmservo (talk)00:10, 10 February 2014
 
 
 
 

What is wrong with meleerumble?

It looks like meleerumble is not updated anymore. If one goes to http://literumble.appspot.com/Rankings?game=meleerumble, one would see that the most recent update was on 2013-11-07, while today is 2013-11-16.

Beaming (talk)05:54, 16 November 2013

That's because nobody is running a client for it. You're welcome to run it yourself; unfortunately, I had to stop the client I was running when I moved out of my old lab.

Skilgannon (talk)10:49, 16 November 2013

I find it impossible to run a client for meleerumble; Robocode just crashes on an attempt to submit results. I have no such problem with roborumble, which continues to happily accept my CPU cycles :)

But something is broken in meleerumble Robocode. Here is the relevant part of the console log for a ./meleerumble.sh v1.8.2.0 run:

RESULT = mld.Wisdom 1.0 wins, bayen.UbaMicro 1.4 is second.
Fighting battle 2 ... dq.Finity 0.2,jangs.ns51 1.0,simonton.micro.Sprout 1.1.3,baal.nano.N 1.42,ap.Frederick 1.1,amk.jointstrike.JointStrike 0.2,cs.sheldor.Talon 1.1,bvh.mini.Fenrir 0.39,arthord.NanoSatanMelee Beta,fullsail.TimbotNoPrediction 1.0
Can't load 'dq.Finity 0.2' because it is an invalid robot or team.
RESULT = simonton.micro.Sprout 1.1.3 wins, bvh.mini.Fenrir 0.39 is second.
Uploading results ...
OK. Portia vs Improved added to queue in 5ms
Exception in thread "Application Thread" java.lang.ArrayIndexOutOfBoundsException: 2
        at net.sf.robocode.roborumble.netengine.ResultsUpload.uploadResults(ResultsUpload.java:176)    
        at roborumble.RoboRumbleAtHome.main(RoboRumbleAtHome.java:137)

Beaming (talk)16:42, 16 November 2013

I just tried a client and it is working fine, so it seems like a problem with your install. Delete the /robocode/roborumble/files and /robocode/roborumble/temp directories and try again.

Skilgannon (talk)16:53, 16 November 2013

Thanks Skilgannon, the above recipe fixed the issue.

But I find it strange; I am quite sure that I was running it from a fresh install.

Beaming (talk)18:20, 16 November 2013

There was an entry in meleerumble without a link; Skilgannon repaired that. But to be sure that no old pieces are left behind, cleaning those directories is good practice.

GrubbmGait (talk)12:48, 18 November 2013
 
 
 
 

Something is wrong with meleerumble. My client fails with

Uploading results ...
Exception in thread "Application Thread" java.lang.ArrayIndexOutOfBoundsException: 1
        at net.sf.robocode.roborumble.netengine.ResultsUpload.senddata(ResultsUpload.java:293)
        at net.sf.robocode.roborumble.netengine.ResultsUpload.uploadResults(ResultsUpload.java:184)
        at roborumble.RoboRumbleAtHome.main(RoboRumbleAtHome.java:137)

On top of that, the last upload was 11 hours ago and it was from me, but I think it had already started to misbehave by that time.

Beaming (talk)15:53, 28 November 2013

I think it's your client again, same fix procedure as last time. You aren't running two clients out of the same folder, are you? It can corrupt the results file, which causes the client to crash.

Skilgannon (talk)16:05, 28 November 2013

I am quite convinced that it is on my side, but I suspected that others had the same problem, due to the lack of uploads.

The same procedure did not fix it, but today I started it again after 8 hours and it works now.

I do run roborumble and meleerumble in the same folder; strangely, roborumble is stable as a rock, while meleerumble gives me hiccups. Do you suspect some race condition and advise using separate folders?

Beaming (talk)16:40, 28 November 2013
 

Never mind, looks like it works now.

Beaming (talk)16:26, 28 November 2013
 
 

What to do about printing too much?

I am trying to print quite a lot while debugging my bot, but I see:

SYSTEM: This robot is printing too much between actions.  Output stopped until next action.

I would like to see the whole output. Is there a way to force Robocode to print everything, something analogous to the debug mode where all graphics events are plotted?

Beaming (talk)15:55, 19 November 2013

One option would be to write to a file instead of printing debug output, particularly if you adjust the per-bot disk quota in the config file.

Rednaxela (talk)16:26, 19 November 2013

Do we have a code snippet anywhere for writing to a file?

Beaming (talk)16:57, 19 November 2013

Kind of a weird link, but PEZ recently posted all his bots to a GitHub repo so that's fresh in my mind. See the robot.getDataFile() stuff: [1]
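
For the record, a minimal sketch of file logging inside an AdvancedRobot (the file name and message are placeholders; in a real bot you would open the stream once and close it in onDeath()/onWin() rather than on every call):

// Write debug output to the bot's data directory instead of the console,
// so the "printing too much" limit never kicks in. Counts against the per-bot disk quota.
try {
    java.io.File logFile = getDataFile("debug.log");
    java.io.PrintStream log = new java.io.PrintStream(new robocode.RobocodeFileOutputStream(logFile));
    log.println("round " + getRoundNum() + " tick " + getTime() + ": whatever you need to inspect");
    log.close();
} catch (java.io.IOException e) {
    e.printStackTrace();
}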

Voidious (talk)17:06, 19 November 2013

Thanks, I will look at it.

Is there a way to see whether the bot runs with the GUI or through a rumble client? Then one could disable all of the debug output during rumble matches to save on CPU and quotas.

One more question: why does the rumble client need a display under an X11 environment? It does not use the GUI, so it should run just fine without one.

Beaming (talk)19:48, 19 November 2013

I don't think Robocode exposes anything about the environment (app vs control API, graphics or not). You can tell if graphical debugging is enabled, though, because onPaint() will be called. You could use that as your debug switch. I mainly use graphical debugging and log severe stuff to file.
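
A tiny sketch of that switch, inside your robot class (the field and method names are arbitrary):

// onPaint() is only called while painting is enabled in the GUI, so use it to flip a
// flag and keep verbose output away from the rumble clients.
private boolean paintingEnabled = false;

public void onPaint(java.awt.Graphics2D g) {
    paintingEnabled = true;          // GUI with painting on => OK to be verbose
    // ... draw debug overlays here ...
}

private void debug(String msg) {
    if (paintingEnabled) {
        out.println(msg);            // skipped entirely when running headless in the rumble
    }
}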

As for X11, I don't know, I would guess it's due to using some java.awt code internally or something. I have run RoboRumble over a terminal quite a bit, but maybe my terminal has the X stuff setup. (Mac OS X Terminal connecting to Ubuntu.)

Voidious (talk)20:13, 19 November 2013
 
 
 
 

Not sure if it's useful, but a secondary suggestion is to try debugging graphics. Some info that takes a lot of text is very simple and clear in graphical form.

Voidious (talk)16:37, 19 November 2013

I am slowly drifting in the direction of graphical debugging, but it is certainly harder to do right than simple text output.

Beaming (talk)16:56, 19 November 2013
 
 

EvBot

Hey, a quick request regarding EvBot: could you have only one version in the rumble at a time? If you want to compare their scores, you can do that on the details page.

Also, just a suggestion, you will get stable rumble results quicker if you make EvBot execute faster. Right now it runs quite slowly, and while that *is* allowed, it makes testing quite a bit more painful ;-)

Skilgannon (talk)11:54, 14 October 2013

I removed the second entry and apologize for the misbehavior.

Can you suggest a tool to test my bot offline? I use RoboRunner, but I see its results differ from the Robocode graphical interface, and they even differ between the paint on/off cases. I am guessing it is due to the fact that my bot is super slow and has more skipped turns in a no-GUI environment.

Beaming (talk)13:45, 14 October 2013

Don't worry about the second one, lots of people do that when they're starting out. It's just better to have one so the rankings don't get as full, and "top 100" actually means "top 100" etc...

Yeah, that sounds like you're dealing with skipped-turn problems. RoboRunner should give you similar results to the main RoboRumble (at least in pair-on-pair scores). If you minimise the graphical interface, do the results look similar to what you're getting with RoboRunner? Also, make sure you aren't running too many simultaneous threads in RoboRunner; if you're running more threads than cores, it can impact skipped turns pretty heavily.

Why don't you make a page for EvBot where you explain what your general strategy is? We might be able to help you get it running faster and avoid the skipped-turn problems.

Skilgannon (talk)17:34, 14 October 2013
 

It's also worth noting you can always revert back to a previous version in the rumble and it retains all its battles. And you can compare to previous versions forever. So you don't lose much by having only 1 version at a time on the participants list.

Nice to hear you use RoboRunner! :-) +1 to what Skilgannon said about those issues. But it's also true that there's almost no replacement for entering your bot in the rumble, so don't stress too much about posting versions frequently. Unless you've got quite a few cores to throw at Robocode, you'll get more battles in the rumble, and you may get slightly different scores on different machines / JVMs too.

Voidious (talk)21:10, 15 October 2013

Guys, thanks for your suggestions. My window manager does not have a minimize button, so I am unable to see the difference with Robocode minimized. But I removed my slow code, and everything seems to be way more predictable. The slow code was a PIF gun with which I stepped into all the beginner pitfalls myself.

Voidious, I have a suggestion for the RoboRunner installer: do not copy the robots folder, just link to it if you think it is a must. At first I was shocked by how long the initialization took, and then I saw a 500 MB directory for each thread.

Beaming (talk)01:16, 16 October 2013

The whole separate "robots" directory thing is not for no reason: In order to avoid conflicts between the running instances, I'm pretty sure one needs a separate "robot.database" file, and separate ".data" directory inside it. Unless I'm missing something, without modifying the robocode engine a little there's probably no way to do that with (sym)links, except by creating a link for each and every jar file, which would be a touch silly in some ways.

To improve that without symlink-spam I see one of two possible approaches:

  1. Modify the robocode engine to separate the "robot.database" file and ".data" directory from the "robots" directory
  2. Modify the robocode engine so it'll read jar files nested in a folder in the "robots" directory (thus allowing a simple symlink to a central robots directory)

For this purpose, personally I'd lean toward the first option simply because symlink APIs/commands are slightly less cross-platform (though I'd personally also not mind #2 supported for other reasons)

Rednaxela (talk)01:48, 16 October 2013

Doh! Sorry about that Beaming. The script is a little quick and dirty compared to what it could be. I haven't touched RoboRunner in a while, but I'll take a look when I get a chance.

@Rednaxela - It's true you need separate robots directories for each install, but I can indeed fix the issue Beaming describes with a more sophisticated shell script. The RoboRunner setup script just uses "cp -r" to clone X copies of whatever Robocode install you point it at, which means it makes X copies of the robots dir. But when using RoboRunner, it copies bots as needed from the "bots" directory into the Robocode installs, so it would be fine to leave them empty during setup.

Voidious (talk)01:57, 16 October 2013