Talk:RoboRunner

Contents

Thread title | Replies | Last modified
Inconsistent APS with LiteRumble | 5 | 09:47, 7 April 2024
RoboRunner GUI | 4 | 23:11, 9 December 2013
Speed vs RoboResearch | 2 | 07:52, 4 December 2013
Computing Seasons Completed With Smart Battles | 0 | 17:52, 26 December 2012
Bug in saved scores? | 1 | 02:05, 25 December 2012
UI | 4 | 00:41, 30 November 2012
Higher portability | 2 | 13:31, 22 October 2012
Support for team battles | 4 | 20:29, 30 September 2012
Can i ask you to have a look at .... | 9 | 08:45, 28 August 2012
Possible bug report | 13 | 22:17, 23 August 2012
calculating confidence of an APS score | 12 | 02:31, 15 August 2012
smart battles | 9 | 16:15, 13 August 2012
priorities | 7 | 09:24, 1 August 2012
Congrats! | 6 | 17:21, 28 July 2012

Inconsistent APS with LiteRumble

Quoted from APS:

The server calculates APS for each bot by:

  1. taking the average percentage score of all battles against each opponent separately to get an APS for each pairing,
  2. then averaging all pairing scores to obtain the final average.

Formally, it's

mean(challenger_score / total_score)

for each pairing.

LiteRumble seems to follow this algorithm.

However, RoboRunner is using:

sum(challenger_score) / sum(total_score)

instead for each pairing.

For example, if one pairing has two battles scored 60/100 and 80/200, per-battle averaging gives (60% + 40%) / 2 = 50%, while the sum-based method gives 140/300 ≈ 46.7%.

This causes scores calculated by RoboRunner to differ from those shown by LiteRumble.

Xor (talk)07:28, 22 December 2022

Here is a Python script that calculates APS correctly. Use this script when you want to align with LiteRumble.

import os
import gzip
import xml.etree.ElementTree as ET

from statistics import mean
from collections import defaultdict


roborunner_dir = os.path.expanduser('~/roborunner')  # replace with your roborunner directory


def get_aps_dict(bot):
  # RoboRunner stores the raw scores of every battle as gzipped XML under data/
  scores = ET.parse(gzip.open(f'{roborunner_dir}/data/{bot}.xml.gz', 'r')).getroot()

  aps_raw = defaultdict(list)

  for bot_list in scores:
    for battle in bot_list:
      battle_scores = {}
      total_score = 0

      for robot_score in battle.findall('robot_score'):
        name = robot_score.find('name').text
        score = int(robot_score.find('score').text)
        battle_scores[name] = score
        total_score += score

      # percentage score of this single battle
      aps = 100 * battle_scores[bot] / total_score

      for name in battle_scores:
        if name != bot:
          aps_raw[name].append(aps)

  # average the per-battle percentages within each pairing, as LiteRumble does
  return dict((name, mean(aps)) for name, aps in aps_raw.items())
Xor (talk)08:49, 22 December 2022
 
Edited by author.
Last edit: 09:21, 6 April 2024

Update: The APS fix is now included in the newest release of my fork. A PR has also been submitted to Voidious's version.

Xor (talk)09:52, 22 January 2023

Did this fix also apply to melee score calculation? I've been having difficulty getting my offline results with RoboRunner to align with LiteRumble for the nano melee rumble.

D414 (talk)09:06, 6 April 2024

Yes, the definition of APS applies to all of the rumbles.

Xor (talk)09:20, 6 April 2024

I think I've solved this now, it looks like the problem was with the way I was generating battles.

D414 (talk)09:47, 7 April 2024
 
 
 
 

RoboRunner GUI

I wrote a custom GUI for RoboRunner; it directly interfaces with a modified RoboRunner. It is in very early alpha. The feature set is nowhere near complete. Please let me know of any bugs.

Robots are not automatically copied to robocode directories. Nothing is saved. Paths cannot (at the moment) be altered. Results do not update automatically; they need to be closed and reopened to update them (I plan to fix this). Thread count cannot be changed after starting (for now, changing this would be involved).

https://github.com/Chase-san/RoboRunner-GUI/releases

Chase22:43, 7 December 2013

I guess I should mention its advantages over RoboJogger, the other GUI for RoboRunner, as well as over RoboRunner itself.

It has a queue that runs challenges in whatever order they happen to be in the queue. This means once you start it, you can run multiple challenges without Robocode needing to restart, and you can also reorder the queue after starting the battle threads.

It is also worth considering its limitations compared to RoboResearch. Aside from what was mentioned in the first post, you cannot start and stop threads individually. I may support this eventually, but at the moment it would require rewriting RoboRunner's BattleRunner class, which also means that once you start the threads, you cannot alter the number of threads running.

Chase13:34, 9 December 2013
 

Congrats on the first release! I'll try to have a look tonight or tomorrow.

Do we need any setup instructions? Or are the roborunner.properties and empty Robocode installs already in the zip?

Voidious (talk)20:30, 9 December 2013

I am working on automating the runner creation and robot JAR copying process today, as well as adding an options dialog to configure the paths and JVM arguments.

The GUI won't use the properties the same way standard RoboRunner does, but the current version doesn't use any at all.

But for the current version, you need to create the robocodes/r[0-9]+ directories yourself (or use the script you wrote). You can pick the thread count in the program itself (but only up to the number of runners you created). Make sure to copy the robots into robocodes/r[0-9]+/robots directories as well.

Chase21:54, 9 December 2013
 

v0.9.2-Alpha

Added options (that save), automatic Robocode runner creation (based on thread count), and automatic robot JAR copying.

Chase23:11, 9 December 2013
 

Speed vs RoboResearch

I'm curious what kind of speed improvements people have seen, if anyone keeps track. Today, in the midst of developing my own UI for RoboRunner, I decided to do a test run to compare RoboRunner vs RoboResearch. I ran MC2K7 Fast Learning challenge against XanderCat 11.6. All data was cleared from the robots directory in both tests. 2 Threads in both cases. RoboResearch ran in 5:30. RoboRunner ran in 3:50. That's about a 30% speed improvement -- pretty dramatic. With dirty data directories where there is a lot of past data, I bet the speed difference would be an order of magnitude more significant. RoboResearch can get pretty slow starting up battles when you haven't cleared out the data for awhile. Results will vary based on the data directories and what kind of robots are being run, but regardless, I think anyone using RoboRunner will see a noticeable speed boost. Very nice.

Skotty01:46, 10 December 2012

RoboResearch has an annoying problem when bots print something in the console. The clients crash and the threads become stuck in paused mode. You have to recreate the clients manually. So, RoboResearch is a lot more than 30% slower.

MN16:55, 30 December 2012

I actually hopefully just fixed the problem about the threads pausing. I still use RoboResearch since getting RoboJogger/RoboRunner to cooperate is difficult at times.

If possible I wouldn't mind hammering RoboRunner into RoboResearch, since aside from a few bugs it works rather well.

Chase07:52, 4 December 2013
 
 

Computing Seasons Completed With Smart Battles

How should we be computing the number of completed seasons with smart battles?

Skotty17:52, 26 December 2012

Bug in saved scores?

When scores are loaded from the ScoreLog, the bullet damage score (which the BULLET_DAMAGE score type relies on) is always 0 for the opponent. Is this a bug in RoboRunner -- either the scores not getting saved correctly or not getting loaded correctly by the ScoreLog? I also noticed that the energy conserved values were also 0 for both the challenger and opponent, so there may be a related bug there too.

Skotty16:28, 24 December 2012

Okay, this wasn't a bug. It was because the opponent wasn't firing any bullets.

Skotty02:05, 25 December 2012
 

UI

Please provide a simple UI for this. All attempts I have made to run this on Windows have met with extreme aversion and then failure.

I got it set up, installed, and finally detecting the correct classes (multiple reasons why this was failing). But then it constantly complained I was not defining the robot to run, the challenge, or so forth. I was doing all of those, in the exact same format presented in the help examples. I made sure. In the meantime I have switched back to RoboResearch, which works.

It doesn't have to be a comprehensive UI. A simple program that runs the console command for you (to remove human error) would be perfectly acceptable. Probably two file browse boxes and a number spinner for seasons.

That way we know it's a problem with the program if it doesn't work.

Chase09:15, 27 November 2012

I could probably build a Java UI to launch it.

Skotty19:16, 27 November 2012
 

I'm now working on a Java user interface for RoboRunner that I call the RoboJogger UI for RoboRunner (or just RoboJogger). I have past experience writing Java Swing applications and also prefer having a UI rather than just a command line tool, so this is a good project for me. I don't have nearly as much free time as I would like, so it may be a few weeks before I have it up and running, but I can keep anyone who is interested up to date on my progress.

Skotty07:51, 29 November 2012
 

Would be very cool to see a nice UI, please do keep us up to date. :-) I think the main thing I'd want to make sure a UI could do is to queue up multiple runs at a time, which is easy to overlook and comes for free (with shell scripts) with the command line version.

When I was considering a UI, my main idea was to make it a web interface. In part because that seems simple, portable, and like something I know how to do, and also because it would offer remote monitoring and access for free. I know I like to check on/alter long running tests while I'm out sometimes. But others may not be as OCD as me and prefer a more native UI.

Voidious17:40, 29 November 2012
 

A WebUI doesn't actually seem like that bad of an idea.

Chase00:41, 30 November 2012
 

Higher portability

Just peeked at the RoboRunner distribution package a while ago. It would be nice if the initial setup was done inside Java code instead of a shell script, so Windows users don't have to port the scripts.

MN15:24, 21 October 2012

Argh, sorry about that. When I wrote the setup, it made a lot of sense to use a shell script instead of spending 10x the time to write it in Java, since I wasn't even sure anyone else would ever use it. But at this point Windows setup support is probably the most glaring omission. A batch file might work, but I don't have Windows to test on so I guess I should go with Java.

Voidious18:47, 21 October 2012

I have Windows, but wasn't willing to port the scripts. :P

Laziness combined with a working RoboResearch setup makes everything harder.

MN13:31, 22 October 2012
 
 

Support for team battles

Does RoboRunner support team battles (1200x1200 battlefield)? Or does it support custom battlefield sizes?

Trying RoboResearch I noticed battlefield sizes are hard-coded at 800x600 or 1000x1000, and there is no place to configure battlefield sizes in .rrc files.

MN18:59, 30 September 2012

Yep, both work fine. To modify battle field size, just add width and height on their own lines after the "num rounds" line in the challenge file. And if your team JARs are in the bots/ dir, just specify them like you would a bot in the challenge file. So e.g.:

My Teams Test Bed
PERCENT_SCORE
35
1200
1200

abc.ShadowTeam 3.83
gimp.GimpTeam 0.1
Voidious19:10, 30 September 2012
 

Ouch, I may have spoken too soon. I'm seeing an exception when I try to run a team battle right now. I'll try and see if I can figure that out, I don't think I needed to do anything special originally to support teams.

Voidious19:29, 30 September 2012
 

Ok, figured it out. Turns out Robocode is a bit confused between getRobotNameAndVersion() vs getTeamLeaderName() in results of a team battle. So I need to use getTeamLeaderName() instead (and for non-teams they return the same). I'll post a fix now.

Voidious19:59, 30 September 2012
 

Ok, posted 1.2.3 with a fix.

Voidious20:29, 30 September 2012
 

Can i ask you to have a look at ....

Hi mate. I'm not sure if I can bother you to have a look at the RoboRunner changes I made. Maybe the development state is a little too early, but I would like to know what you think.

What is new:

  • configuration - included, no need for external scripts and Windows should be supported as well
    • just type CONFIG and go through the options
    • this will make all internal robocode directories (depending on how many installations you want)
    • it should be quite fail proof and checks the input for validity
    • you can also reconfigure it if you want to switch to another robocode version or something
  • challenges - can be switched on the run
    • type CHAL to go through the options
    • the challenge file format should be the same as it was before
    • all missing bots will be copied to the instances (well not new but it works like before)
  • the processes stay initialized
    • that means if you have once started the instances they are ready to take more battles after the challenge is over
    • or you can switch to another challenge and run it on these instances as well
  • you can stop running challenges
    • if you type STOP while the challenge is running all processes stop the current battles and can be restarted (they stay initialized)
  • with DEBUG you get additional information (about the messages and some standard output from the processes) - I plan to make this configurable so you can see what you want
  • with AUTORUN the next time you start the program it takes the last configuration and challenge and runs it automatically
  • with STATUS you get the configuration and running state of all processes
  • HELP shows some help (not much yet)

What is not ready yet:

  • everything with result output is not finished yet
  • the results are coming back from the processes (and will be printed to the console) but there is no processing of this information right now
  • i plan to take your code and then it will be possible to print results with whatever output you prefer (also offline results)
  • basically this means - you can say: show me result bla (avgDmg,score,cats,dogs - whatever) and it will be extracted from the current available results

I would be interested in whether it runs with all the CPUs you have and has no concurrency issues, and if the usability is OK.

I had to change quite a lot, but it still has the spirit of your RoboRunner and should work the same as yours. It is based on a very basic communication protocol to make it extendable for later needs.

You can find it here: roborunner_wompi.zip. To start it just run the ./rr.sh (same as yours)

If you don't trust the class files, the sources are included or available from GitHub as well.

You don't have to play much with it, just a quick start and configuration and one of the example challenges would be great. This should only take a couple of minutes.

Take Care

Wompi14:45, 27 August 2012

Cool, I'd be happy to take a look! I particularly like the idea of having a Windows-compatible setup. I think my main concerns from your changes are:

  • If RoboRunner stays running after finishing a challenge, that would screw up how I often use it, which is having multiple dev versions queued up in separate runs via shell script. So I'd either need to make that configurable, or also add support for running batches (which I guess would also be necessary if ever we have a GUI).
  • The interactive commands sound really powerful and I would definitely use them :-), like when a dev version is tanking and I want to just kill it and move onto the next one I have ready to go. But I also like simplicity and not having a big learning curve to using RoboRunner, so I just want to make sure it doesn't feel like "you have to learn a bunch of commands" in order to use it. So I'd like to make sure you can get by without knowing them, and/or that they're really easy to find and learn about. A lot of how I use RoboRunner is queueing up a few runs and leaving it for hours, so I definitely want to keep full support for non-interactive batch runs too.

Thanks man! Very cool to have someone else using and contributing to this. :-)

And I promise I have not forgotten about the custom scoring, I just haven't gotten to trying out some of my ideas with it. I was curious if you're still using that in your development? And if so, what kind of stuff do you collect and how do you like it?

Voidious15:09, 27 August 2012
 

Yes, I rewrote most of the original stuff to be highly configurable. I'm a little unhappy with all the changes right now, but I hope in the end it will pay off to have something really nice for running test beds. It's fairly easy to provide batch runs with multiple dev versions. I just have to make the challenger input ',' separated and then it runs all challengers against the current challenge. Or maybe an input file where all challengers are included (linked to whatever challenge).

I guess I use RoboRunner in a slightly different manner right now. While working on my bot I make a quick dev jar and let the runner run a couple of battles against my test beds. That way I can still make changes - and in the background the first results can show me if I was wrong or if I'm on the right track with my changes. That's why I wanted the processes alive. What I had in mind was having RoboRunner running indefinitely, and if a new version arrives it just grabs it and runs the challenge against the new bot version. I can also switch the challenge on the run, so if I think I need another view of my development state it's just one switch at the console. I'm definitely on your side of having RoboRunner do its stuff without maintenance. The console is just a tool for making changes if you think you have to. And besides the config stuff it's just a run command now.

The main reason I switched to a communication protocol is having the possibility to improve later versions with fancier stuff like diagrams of certain battle states. I don't know if you run some melee tests as well, but for me it's better to have just a couple of precise test beds rather than letting the challenger run against everyone above a certain level. I guess this will be more important if I go for an appropriate 1v1 strategy (someday :)).

About the custom scoring, it's more custom battlefield statistics for me. It's still one of my main targets for RoboRunner. Some of my statistics gave me a quite nice view of what's going on on the battlefield and what I should watch out for. Like average field population (where are the most crowded spots and how was my survival at these spots) or bullets fired far away from me with more than 6 opponents on the field (how often did I catch a hit from these bullets and where would be a better place to stay). Yes, there's quite a bunch of other stuff too and most of it is just not worth it but, you know, sometimes you have to think strange.

Wompi17:11, 27 August 2012
 

Oh no - real tabs, braces on their own lines, lines over 80 chars?! :-) It's funny, at my last job, almost every file had its own different code style, so I was very flexible. My current job is much more rigid on code style, and now I realize I've also become more rigid... Might have to reformat some stuff, at least in the main package. :-) I just took a quick look for now, though. I'll look more and test out your stuff when I get home later. Btw, should I hold off playing with the code much if you're still making major changes?

The first thing I wanted to do with custom scoring was to track start positions for Diamond in 1v1, to see if there was any pattern to which rounds he lost. Like, when he loses 1 random round out of 35 to Raiko, is it because we started near each other? Or I started in a corner? Or I didn't start in a corner? Or is it just always the first round before we have much data? Seems like there could be a lot to gain just shoring up some of the "unlucky" stuff that can happen in a battle. But yeah, I realized most of the useful stuff you'd do with it would be custom stats like that, not traditional "scores" like with percent score or bullet damage. Passing values back from RoboRunner and storing them in the XML should be no problem, I just want to come up with a nice/simple/flexible API for the listener to log the values and RoboRunner to format them. I might want to peek at some of your code for the dynamic class loading, since I haven't done much of that before.

Voidious17:59, 27 August 2012


 

Ah, very good points. The Musashi trick shouldn't last long (stops as soon as they're hit once), but certainly the 1+ rounds of stop and go would screw up my guns. Maybe it's worth special casing that and clearing gun data once you detect the switch.

And the relative fast-learning of their guns in early rounds isn't something I'd thought about much. Maybe there's a place for some light flattening early on as soon as you know they're using something besides simple targeting.

I've just been thinking a lot about all these bots that take 1-2 rounds off of Diamond (and DrussGT). If you're winning 95% of rounds vs a given bot, maybe you’re just a little consistency away from 99%. And it could be from simple stuff, like starting a round cornered or too close. But it might take some real research to figure out some causes (or just give up and accept "randomness" =)).

Voidious21:38, 27 August 2012
 

Hehe, the formatting style discussion I know all too well :). I have no problem if you reformat it to whatever you think is appropriate; I'm used to reading all kinds of code styles (working in them is another discussion :) ). I don't know if you develop with Eclipse, but if so, just give me your formatting file and we will see how it works :). I started programming when all these formatting rules made sense (80x40 terminals). You wouldn't have gotten far with braces on their own lines and lines over 80 chars, but those days are long, long gone :) and with today's monitor resolutions I don't see why I shouldn't use them. I'm surprised that you still have discussions on code style at work. We had a check-in format at work and everyone had to use it before checking files in to cvs. If you check out the files, just format them to whatever you like and that's it. Maybe I can convince you to give up the 80 chars per line rule, because with all these method().method().method() calls it is quite hard to maintain a readable code line. Well, like I said, format it to whatever you think is good :).

If it comes to changes, don't hold yourself back, do whatever you want (in your branch or in mine, no matter). This Git stuff is way more failure resistant with merges than cvs/subversion and I have no doubt that I can handle the changes. The code right now is still in draft status and I haven't looked at where I could bring some things together or should open things up and pull them apart. I just wanted to bring it to life and start the improvements from there. I'm thinking of changing the communication anyway to RMI or TCP, but for now I'm fine with the process in/out like you did. It's also ready to take a GUI (but that's not really something I'm thinking of right now). I'm not sure if I have the dynamic stuff included right now, I guess it is still in another project, but I think I will bring it over within the next couple of updates.

Reading about Diamond's start positions made me wonder whether it would be rewarding to use the start position feature of the RobocodeEngine. This would give you the opportunity to set all kinds of interesting start positions and just rumble it out.

Wompi19:21, 27 August 2012

I basically use the default Oracle/Sun format and style guide, just with 2-space tabs (no real tabs). 2-space tabs makes 80 char lines a lot more reasonable, too. I generally don't auto-format because some stuff is still a judgement call, like breaking lines in the clearest way.

I certainly learned a long time ago that code style is just something you have to compromise on to get anything done in a collaborative setting. =) I didn't quite realize I’d become so accustomed to one Java code style until browsing your code. And yeah, I was also surprised at first that code style was so enforced at my current job. It was an adjustment after having the exact opposite situation at my previous job (just stay consistent within a given file). But now I like it, and/or am brainwashed. At a large enough company with a lot of code sharing, it kind of makes sense to just settle on something. Deep down I’m not actually a psycho about code style, but maybe keeping it consistent within each package makes sense, and wouldn't be too painful for either of us.

Voidious21:05, 27 August 2012
 

Ok, gave it a shot and everything seems to be working fine with 8 threads on my Linux box. Actually I like the feel of this environment more than I expected to, it's very cool. And I like the extra stuff you save in roborunner.properties now. Looks like some of the output is just incomplete for now (I didn't see overall scores at all?), and there's definitely plenty of room to polish up the usability side of things, but that stuff's a pleasure to work on once you have all the core stuff working. =)

I'd be happy to take a pass at improving some of the usability stuff, or just writing up a list of ideas, but maybe I'll wait a bit if you've still got some stuff in progress with this. Nice work man!

Voidious01:11, 28 August 2012
 

Thanks man. Sure thing, your improvements and ideas are very welcome. The current design is open for quite a lot of directions.

Like I said, everything that relates to score processing (collect, save, average, overall score, smart battles) is not implemented yet. The results you can see are the results of every battle and contain all available data fields of the onCompleted() BattleResults. They just have to be parsed and processed now. I just wanted to have the config, chal and messaging stuff ready, because I know myself too well. Once it's implemented I probably won't touch it ever again, because I don't like writing code around path, file and input checks. Now that that's finished, I can spend all my time making the result/statistics stuff smooth and fluffy. That's why I wanted to have your opinion about it, to make the related changes now.

Wompi08:45, 28 August 2012
 

Possible bug report

Heya Voidious,

I think I may have found a bug.

I finished a run of deBroglie rev0130 last night on the test bed you made for me. Score was in the lower 80s.

Just now, I manually made a .rrc testbed with some high performing bots. Started running it, and here's the output. Looks like RoboRunner is carrying over the score from the other challenge file?

~/roborunner $ ./rr.sh -bot tjk.deBroglie rev0130 -c debroglie_mega.rrc -seasons 20

Copying missing bots... 0 JAR copies done!

Initializing engine: ./robocodes/r1... done!

Initializing engine: ./robocodes/r3... done!

Initializing engine: ./robocodes/r2... done!

Challenger: tjk.deBroglie rev0130

Challenge: deBroglie Megabot test

Seasons: 20

Threads: 3

 tjk.deBroglie rev0130 vs lxx.Tomcat 3.67c: 39.79, took 57.6s, avg: 39.79

Overall score: 81.16, 170.42 seasons

 tjk.deBroglie rev0130 vs voidious.Diamond 1.8.1: 31.91, took 72.3s, avg: 31.91

Overall score: 80.83, 170.5 seasons

 tjk.deBroglie rev0130 vs jk.mega.DrussGT 2.7.3: 37.2, took 82.0s, avg: 37.2

Overall score: 80.54, 170.58 seasons

Tkiesel17:13, 26 July 2012

Yep, it seems I'm printing the overall score for every bot you've faced, not just the ones in the current challenge file that's loaded. I'll see about fixing that later today. You can just delete (or rename for now) the file from the data directory if you want to start fresh. Thanks!

Voidious17:22, 26 July 2012
 

Or you could keep/copy just the lines for those bots in the data file, if you feel like mucking with it.

Voidious17:33, 26 July 2012
 

Ok, posted the fix in 1.0.1: [1] Only things to update are the RoboRunner JAR and rr.sh which points to it. It was just a problem with the output, so things should work fine with your old data file, if you still have it.

Voidious21:20, 26 July 2012
 

Hi mate. I got a little Exception :)

java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: java.lang.ArithmeticException: / by zero
	at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
	at java.util.concurrent.FutureTask.get(FutureTask.java:83)
	at robowiki.runner.BattleRunner.getAllFutures(BattleRunner.java:95)
	at robowiki.runner.BattleRunner.runBattles(BattleRunner.java:80)
	at robowiki.runner.RoboRunner.runBattles(RoboRunner.java:338)
	at robowiki.runner.RoboRunner.main(RoboRunner.java:89)
        ...
Caused by: java.lang.ArithmeticException: / by zero
	at robowiki.runner.RoboRunner.printOverallScores(RoboRunner.java:485)
	at robowiki.runner.RoboRunner.access$4(RoboRunner.java:466)
	at robowiki.runner.RoboRunner$3.processResults(RoboRunner.java:734)
	at robowiki.runner.BattleRunner$BattleCallable$2.run(BattleRunner.java:197)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)

One question. If i fork RoboRunner to my GitHub repositories and make changes, does it mean i have a new project or is it more like a separate branch and we could merge some changes i made?

Take care

edit: stupid me, i posted just the head

Wompi10:26, 22 August 2012
 

Seems like this would only happen when printing the overall score for 0 battles? Is it possible that was the situation? If so I'm not as worried about it being a bug, but we should check for it and print something nicer. If it shouldn't have had 0 total battles, then it's a deeper problem with the score tallying I guess.

This is my first experience with GitHub, so I don't know for sure, but I'm pretty sure the main idea behind forking is for you to make changes and then I can pull them back in. I think you issue a "pull request" once you've made your changes. I also think it can function fine as a new project if you don't ever intend to merge back.

Voidious14:32, 22 August 2012
 

Yep, the problem is deeper. It looks like, if I had no battles before, everything is fine (not 100% sure). Then, if I restart the test run, this exception comes up. I ran it with 20 seasons (melee).

I made just a quick fix for myself, so I can still use it. The only thing I lost was the 'Overall' score output - but I'm fine with the 'Average' output.

I have forked your repository and made a new branch from the main branch, not sure if you can see it on your side too. The only thing I changed so far is the output of the melee score (just formatting). Yes, I guess I will use it mainly as a new project and tweak it to my needs, but I thought for little bug fixes it would be easier to just merge the branches.

Wompi14:59, 22 August 2012
 

So you can see the latest and average score for each battle, but overall score throws that exception? How strange. Could you post your data file somewhere so I can try to reproduce? That would be super helpful. (roborunner/data/package.BotName version.xml.gz)

I'd certainly like to pull back any bug fixes or awesome new features. =) What's your melee score output look like? I've used it for Melee a little but mostly 1v1, and even for Melee I tend to focus on overall score, so I'm open to suggestion. I've also considered a -verbose option (or something) for printing extra scoring details, like survival/bullet damage even when you specify APS as the scoring style.

Voidious15:18, 22 August 2012
 

Yep, give me a second, I'll roll back the fix and give you the output file ....

Wompi15:42, 22 August 2012
 

Ok, there it is: RoboRunner-bugtrace.zip. Looks like I was wrong, it happens straight from the start. I deleted all XML files and the output of the first run is shown in the zip file. Maybe it helps :). Let me know if you need more. I broke the run after the second season with 'CTRL-C'.

Well, I just made the melee output a little more 'eye' friendly :) but I guess I will enhance the output to something like what I use in my other outputs in the next days (nothing serious, just a little more info on the bullet hit ratio of all bots, some movement stats, sorted output of APS and a table of all bot scores against each other). Based on an early RoboRunner version I rewrote it into a console-like application. So basically you start the program and use console commands to configure, run and output some stuff. Unfortunately it does not use multiple threads, and I'm now back to the latest RoboRunner, and maybe I can merge the two somehow.

I think if you look in GitHub at Network you should see the forks that go off of your main branch.

Wompi16:18, 22 August 2012
 

Great, thanks! I was able to duplicate it here and figured out the problem. RoboRunner gets confused by having 2 of the same bot in a battle (mld.DustBunny 3.8 in this case). It looks like BattleListener eats the result right away when it builds a map of scores by bot name/version. (Edit: So RoboRunner has zero scores for the actual bot list when it tries to calculate overall score.) I have to head out in a few minutes, but I'll try to get a fix out later tonight or tomorrow.

Voidious00:03, 23 August 2012
 

Ok, I think it's all set. Tested it with the challenge you provided, dropped it into my currently running melee test a half hour ago and that still looks right, and did my usual round of manual setup and tests. The fix was mostly pretty easy thanks to Guava's Multimap stuff, but it also led to some minor refactoring so that nothing is based on looking up a score only by bot name, besides the challenger bot. I think it should work fine even if the challenger is also a reference bot, even though that seems silly - the first score for that bot in each battle would be considered the challenger score.
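
The Multimap idea in a nutshell (a toy illustration with Guava, not the actual RoboRunner change; the scores are made-up numbers):

import com.google.common.collect.ArrayListMultimap;
import com.google.common.collect.Multimap;

class MultimapSketch {
  public static void main(String[] args) {
    // A Multimap keeps both entries when the same bot appears twice in one battle,
    // where a plain Map keyed by bot name would silently overwrite one of them.
    Multimap<String, Double> scoresByBot = ArrayListMultimap.create();
    scoresByBot.put("mld.DustBunny 3.8", 812.0);
    scoresByBot.put("mld.DustBunny 3.8", 790.0);  // second copy of the same bot
    System.out.println(scoresByBot.get("mld.DustBunny 3.8"));  // [812.0, 790.0]
  }
}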

Hopefully it won't be too painful of a merge for you. ;)

Voidious05:08, 23 August 2012
 

Yep, works fine. Thanks. It wasn't supposed to have two of the same bot within the challenge :) - I realized that I just took an old challenge file while switching to the new RoboRunner version. But I guess in this case it was lucky for detecting the bug.

I tried yesterday to make the challenger a development bot. I changed the copy bot function to let bot names with ..* through, but somewhere it lost the name. Can you give me a hint where the bot name comes back from the process? The RobocodeEngine can work with development bots if the properties contain the right path. What it does, if you give it, let's say, wompi.Wallaby* , is change it to wompi.Wallaby* 4.7 for the result output (this works so far). Now I thought I'd just change the name back to my original (wompi.Wallaby*) within the BattleResultHandler (I guess this is where the results are coming back from the process) and it could work with development bots. It was just a quick try and I will try it today more seriously, but maybe you have a quick solution. I guess you are more used to your code and could say where it stores references between name and score. The sad thing is even if I'm giving it the complete name (wompi.Wallaby* 4.7) it doesn't work :(. I guess somewhere the "*" is a delimiter or gets lost. Please don't put any time into this, it would just be nice if you have a quick hint.

I have to admit that this GitHub stuff is very neat. It's so easy to work with - thanks for pointing me at this by releasing RoboRunner over GitHub. I'm a little more used to it now and figured out how the forking works.

It's basically:

  • fork your origin -> my origin
  • clone to local
  • (optional) make branches
  • add your origin as a remote (this keeps me up with changes on your side)
  • merge remote -> my branch
  • push my branch -> my origin
  • (optional) make a pull request to you

It's pretty straightforward, and with GitX you have a nice graphical view of the branches too :)

Take Care

Wompi10:47, 23 August 2012
 

I don't think the name should be interpreted as a regex anywhere or anything like that. I think that whatever comes back from robotResults.getRobot().getNameAndVersion() in BattleListener should be handled by the rest of the code OK. The other points of concern that come to mind are:

  • Copying the dev bot into the Robocode install directories means copying your package dir and classes into the robots dirs of each Robocode install, which is not as simple as copying one file. (Unless you have them all configured to look at some other directory?)
  • Assuming Robocode can find it, checking whether the dev bot you specify is actually running in the battles.

For the second point, you could try:

  • Modifying BattleProcess to do _engine.setVisible(true), so you could see the battles that get run.
  • Run robowiki.runner.BattleProcess (with -path to Robocode, -rounds, -width, -height) and try running battles with your dev bot. BattleProcess is a command line application where you can type in a comma delimited list of bots (like "jam.mini.Raiko 0.43,voidious.Diamond 1.8.1") and it runs the battle and spits out the result (see the sketch just below).
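
To illustrate the second option, driving the engine directly through the Robocode control API looks roughly like this (a sketch only; the install path and bot list are just examples, and this is not BattleProcess's actual code):

import java.io.File;

import robocode.control.BattleSpecification;
import robocode.control.BattlefieldSpecification;
import robocode.control.RobocodeEngine;
import robocode.control.RobotSpecification;

class VisibleBattleSketch {
  public static void main(String[] args) {
    RobocodeEngine engine = new RobocodeEngine(new File("./robocodes/r1"));  // example install dir
    engine.setVisible(true);  // show the battle so you can check the dev bot really runs
    RobotSpecification[] bots =
        engine.getLocalRepository("jam.mini.Raiko 0.43,voidious.Diamond 1.8.1");
    BattleSpecification battle =
        new BattleSpecification(35, new BattlefieldSpecification(800, 600), bots);
    engine.runBattle(battle, true);  // true = wait until the battle is over
    engine.close();
  }
}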

And yeah, I'm liking GitHub too! I know PEZ is a big fan, though he's not doing Robocode stuff. I didn't know about GitX, I'll have to give that a shot. Maybe it will encourage me to make better use of branches. ;)

Voidious22:17, 23 August 2012
 

calculating confidence of an APS score

Hey resident brainiacs - I'm displaying confidence using standard error calculations on a per bot basis in RoboRunner now. What I'm not sure of is how to calculate the confidence of the overall score.

If I had the same number of battles for each bot, then the average of all battles would equal the average of all per bot scores. So I think then I could just calculate the overall average and standard error, ignoring per bot averages, and get the confidence interval of overall score that way. But what I want is the average of the individual bot scores, each of which has a different number of battles.

Something like (average standard error / sqrt(num bots)) makes intuitive sense, but I have no idea if it's right. Or maybe sqrt(average(variance relative to per bot average)) / sqrt(num battles)?

This would also allow me to measure the benefits of the smart battle selection.

Voidious19:45, 13 August 2012

I don't actually think this can be correctly modelled by a unimodal distribution - you will be adding thin gaussians to fat gaussians, making horrible bumps which don't like to be approximated by a single gaussian mean+stdev. I almost wonder if some sort of Monte-Carlo solution wouldn't be most accurate in this instance - at least the math would be easy to understand.

Skilgannon22:11, 13 August 2012
 

Good call! That was super easy. I don't recall this Monte-Carlo stuff, but the name rings a bell so maybe I learned about it at some point.

So I calculate 100 random versions of the overall score. For each battle that goes into it, instead of the real score, I generate a random score, assuming a normal distribution using the mean and standard deviation I have for that bot. Then I take the standard deviation of those randomized overall scores and multiply by 1.96 for the confidence interval. Seems like a lot of calculations, but only taking a few hundredths of a second even with 250 bots/3000 battles, so I can afford to do it even when I print the overall score after every battle. Nice!
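
A minimal sketch of that procedure (BotStats and the class names here are made up, not RoboRunner's actual code):

import java.util.Random;

// BotStats is a stand-in for the per-bot data RoboRunner keeps: mean score, standard
// deviation and number of battles for one pairing.
class BotStats {
  double mean, stDev;
  int battles;
  BotStats(double mean, double stDev, int battles) {
    this.mean = mean; this.stDev = stDev; this.battles = battles;
  }
}

class ConfidenceSketch {
  static double overallConfidence(BotStats[] bots, int iterations) {
    Random rnd = new Random();
    double[] samples = new double[iterations];
    for (int i = 0; i < iterations; i++) {
      double overall = 0;
      for (BotStats bot : bots) {
        // replace each real battle score with a draw from that bot's distribution
        double sum = 0;
        for (int b = 0; b < bot.battles; b++) {
          sum += bot.mean + rnd.nextGaussian() * bot.stDev;
        }
        overall += sum / bot.battles;   // randomized per-bot average
      }
      samples[i] = overall / bots.length; // randomized overall score
    }
    // standard deviation of the randomized overall scores, times 1.96 for ~95% confidence
    double mean = 0;
    for (double s : samples) mean += s;
    mean /= iterations;
    double var = 0;
    for (double s : samples) var += (s - mean) * (s - mean);
    return 1.96 * Math.sqrt(var / iterations);
  }
}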

Voidious03:03, 14 August 2012

Now I'm just bummed that it's still +- 0.06 after 3000 battles. :-(

Voidious03:11, 14 August 2012
 

I'm curious - did you use the Monte-Carlo method for calculating the non-smart-battles deviations?

Also, how long did it take to get the 3000 battles compared to the non-smart-battles?

Skilgannon07:32, 14 August 2012
 

I'm using the same Monte-Carlo method for confidence either way. I hadn't run too many side by sides yet, but I'll do some more soon. Over night, I ran a test of 25 seasons of TCRM in regular vs smart battles mode on my laptop. They took about the same amount of time, and both ended up showing +- 0.363. But the smart battles came out to 89.32, very close to the 89.31 I got when I ran 100 (non-smart) seasons before, while the normal battles ended at 88.76.

So I'm a little disappointed it wasn't faster nor showed a better confidence, but it was a lot closer to the true average. And I guess my confidence calculation sucks or something weird happened, since 88.76 is much farther than .363 from the true average. (And yes, my TCRM score has tanked that much since its glory days!)

Voidious13:47, 14 August 2012
 

Are you sure that you're first averaging all the scores into each bot before averaging the scores together for the section? It wouldn't make a difference in the old method, since they all had the same number of battles, but it would affect things in the new one.

Skilgannon13:55, 14 August 2012
 

I guess the other possibility is that Diamond is so much slower than the bots it is facing that it doesn't make much difference which one you face. What was the spread of battles like on the TCRM? Were they spread fairly evenly, or were certain battles highly prioritised?

Skilgannon13:57, 14 August 2012
 

Yeah, that's a good point, especially with the TC bots that are just simple random movements and no gun. If the variation in confidence is higher than the variation in speed, it could take longer for the same number of battles. I guess the puzzling thing is the overall confidence calculation showing the same both ways. With a limited amount of sample data, I guess it can only be so accurate, but I'm thinking I may have a bug there. The spread was:

  apv.AspidMovement 1.0: 95.6  +- 0.83  (16 battles)
  dummy.micro.Sparrow 2.5TC: 98.43  +- 0.64  (13 battles)
  kawigi.mini.Fhqwhgads 1.1TC: 96.95  +- 1.11  (21 battles)
  emp.Yngwie 1.0: 98.15  +- 0.77  (14 battles)
  kawigi.sbf.FloodMini 1.4TC: 94.91  +- 1.25  (24 battles)
  abc.Tron 2.01: 88.15  +- 1.42  (26 battles)
  wiki.etc.HTTC 1.0: 88.83  +- 1.45  (28 battles)
  wiki.etc.RandomMovementBot 1.0: 92.23  +- 1.04  (22 battles)
  davidalves.micro.DuelistMicro 2.0TC: 86.22  +- 1.61  (31 battles)
  gh.GrubbmGrb 1.2.4TC: 81.29  +- 1.87  (33 battles)
  pe.SandboxDT 1.91: 85.48  +- 1.8  (31 battles)
  cx.mini.Cigaret 1.31TC: 86.82  +- 1.62  (31 battles)
  kc.Fortune 1.0: 80.6  +- 1.77  (29 battles)
  simonton.micro.WeeklongObsession 1.5TC: 87.02  +- 1.48  (26 battles)
  jam.micro.RaikoMicro 1.44TC: 79.16  +- 1.8  (30 battles)

Going to leave some tests with Diamond 1.8.16 in real battles running today and see how that compares.

Voidious14:04, 14 August 2012
 

Those +-, are they the standard error or the stddev?

The only thing I can think of testing is whether you are calculating the right number of random battles for each in the Monte-Carlo method. If you were only doing one battle for each, then the numbers you are getting would be the same for the standard as for the smart battles. It looks like the prioritisation is working well though - Sparrow and Yngwie both have low number of battles as well as low error/stddev.

Skilgannon14:16, 14 August 2012
 

The per bot +- is the 95% (or 97.5%?) confidence = 1.96 * standard error = 1.96 * standard deviation / sqrt(num battles).

It probably is something silly like the one battle per bot you mentioned, but at a glance it seems like the overall confidence calculation isn't doing anything stupid. I'll have a longer look this evening. I do think the smart battles are working well, though, I'd just like to have some numbers to back me up. =)

The spread is a bit more interesting in real battles. HOT bots with 99.9% scores will get 2-3 battles in 12 seasons. RamBots get lots of battles because they have fairly high variance and run super fast.

Voidious14:46, 14 August 2012
 

Some results with normal battles. Diamond 1.8.16 vs 50 random bots for 10 seasons.

  • Dumb battles: took 6338.8s, 89.87 +- 0.188
  • Smart battles: took 6010.6s, 89.94 +- 0.148

Looks like it hit ~0.18 by 5 seasons with smart battles. Right now I'm using a much rougher calculation for printing overall confidence between battles, for speed. I will be improving this with some caching of the random samples for the overall scores. I do a much more thorough calculation for the final score.

It's a slightly different calculation with the scoring groups, so maybe I only have a bug there. Or maybe there just wasn't much difference in the TCRM. Or maybe TC scores are so far from normally distributed that it throws it off. Or maybe it was just a fluke - the same confidence down to 3 digits seems pretty unlikely even with the same battle selection.

Voidious18:38, 14 August 2012
 

Well, the verdict is in. Looks like a combination of fluke and the TCRM battles just not being particularly optimizable. I ran another 25 seasons each way and got:

  • Dumb battles: Took 2690.4s, 89.13 +- 0.362
  • Smart battles: Took 2858.8s, 89.4 +- 0.338

So this time smart battles actually took longer, but had a better confidence and were again much closer to the true average. I also tested that the groups and non-groups versions of overall confidence were giving the same for TCRM (because groups are of equal size). I'm going to skip any fancy attempts to optimize for a more accurate overall confidence between battles, round the final confidence to 2 digits instead of 3, and get this posted.

Voidious02:31, 15 August 2012
 
 

smart battles

So I'm planning to implement smart battle selection this weekend. Every bot (or bot set) will get at least two battles, then I will choose battles to run (in batches since I don't want idle threads) based on trying to decrease standard error in the least amount of time. Maybe with some random battles sprinkled in as well.

I'm thinking I will choose bots with the highest value for: <math>{{stDev \over \sqrt{numBattles}} - {stDev \over \sqrt{numBattles + 1}}} \over {avgBattleTime}</math>

I think this will lead to an overall result with the highest confidence in the least amount of time.
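
As a sketch, that metric translates to something like this (illustrative names, not actual RoboRunner code):

class BattlePriority {
  // Expected reduction in standard error from running one more battle against this bot,
  // divided by how long its battles take: a direct translation of the formula above.
  static double priority(double stDev, int numBattles, double avgBattleTime) {
    double currentError = stDev / Math.sqrt(numBattles);
    double nextError = stDev / Math.sqrt(numBattles + 1);
    return (currentError - nextError) / avgBattleTime;
  }
}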

I like testing against a test bed with an average score about the same as my RoboRumble APS. The problem with this is it includes a lot of bots with super low variance (eg, 99.9% scores), so running lots of battles against them is a waste of time. But ignoring them and using a stronger test bed risks specializing against stronger bots.

Voidious16:52, 10 August 2012

That looks like a good metric for choosing fast stability. Now I'm wishing I'd included variance in the LiteRumble scores...

Skilgannon18:33, 10 August 2012
 

Yeah, do you just store a running tally of average score? I'll need to update RoboRunner to keep scores from every individual battle, too, along with battle times.

Voidious20:15, 10 August 2012
 

Yeah, I do an online mean calculation, so newMean = oldMean*(n/(n+1)) + newScore/(n+1), n++
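
In code that update is roughly (just a sketch, names made up):

class OnlineMean {
  // Incremental mean update; n is the number of scores already included in oldMean.
  static double update(double oldMean, double newScore, int n) {
    return oldMean * ((double) n / (n + 1)) + newScore / (n + 1);
  }
}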

I've actually thought quite a bit about this, and it all depends what score you're trying to stabilise. If you're trying to stabilise the PL, for instance, you need to run lots of battles for pairings at or near the 50/50 mark. If you're doing Schultz then lots of battles need to go to where a weak bot beat a strong bot. It's all about which battle has the most potential influence.

Skilgannon07:54, 11 August 2012

Yeah, for sure you would focus on different battles to optimize other rankings. I'm not sure I need to add a "focus on win/loss" flag to RoboRunner, since you'd probably just test against your toughest matchups if that's what you were working on. It does support smart battles for all the score types, though (eg survival, bullet damage).

If we do implement this type of smart battle selection in a rumble system, maybe we could have a client side setting for what you're interested in optimizing. =) I guess to start it would just be APS vs win/loss, but it could include Schultz or Vote at some point.

Voidious15:47, 13 August 2012
 

Got this working, just dogfooding it a bit myself before posting it since it's a pretty major change. Data files are now (gzipped) XMLs with the raw scores from every battle and everything's recalculated on the fly. (That was actually most of the work.) Comes out to about 100 kb for 3k battles.

It runs 2 seasons vs each bot then does smart battle selection with the formula above to try to increase overall accuracy as quickly as possible. It's nice to see test runs where only 2 battles were run vs HawkOnFire. =) 5% of the time, it instead chooses randomly among the bots with fewest battles, to try to mitigate cases where the variance was randomly low in the initial battles. (I can make this configurable if/when anyone cares.)

It won't schedule two battles vs the same bot unless the number of bots is <= the number of threads. Otherwise, you'd keep scheduling that bot until the battle finishes. I could instead estimate how many times in a row it would still be worth scheduling it, but that seems like a lot of work for a corner case.

I think this is going to save a heck of a lot of CPU time. The XML data files will also make it easier to let you store arbitrary score data in the custom scoring stuff.

Voidious21:50, 12 August 2012

Though I'm still figuring out how to avoid potentially corrupting the data file if you ctrl-C your run. I'm not sure if skipping the gzipping would help or if it's just become more likely because the data files are so much larger. Maybe I need to add a keyboard option to safely exit.

Voidious21:54, 12 August 2012
 

Just a quick note. Maybe you know that already, but you can add a shutdown hook to the runtime. This would catch CTRL-C and you could cleanly shut down the gzipping. Not sure if that is what you're looking for.
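
A minimal sketch of that suggestion (the save step is a placeholder, not RoboRunner's actual method):

class ShutdownHookSketch {
  public static void main(String[] args) {
    Runtime.getRuntime().addShutdownHook(new Thread() {
      @Override
      public void run() {
        // stand-in for the real work: rewrite the gzipped data file one last time
        System.out.println("Saving scores before exit...");
      }
    });
    // ... run battles as usual; the hook also runs on CTRL-C before the JVM exits
  }
}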

Wompi22:08, 12 August 2012
 

Cool, yeah, that might do the trick. I'm trying just doing a fresh save of the score data in the shutdown hook and I'll see if I can ever replicate the problem.

Voidious22:20, 12 August 2012
 

Still getting the feel for how many seasons to run with smart battles. It indeed seems to be much more accurate in less time, but I'm not sure to what degree I should:

  • Run less seasons because it's more accurate per number of battles.
  • Run the same number of seasons, since it will run faster and still be more accurate.
  • Run more seasons, in about the same amount of time as before, but with much more accuracy.

I guess it partly depends on how patient you were being before this feature. =)

Edit: Part of the dilemma is that this focuses on accuracy per time, not per number of battles. So maybe with a certain test bed, you don't gain accuracy in 10 seasons vs traditional battle selection, but it completes in 25% less time and gives the same accuracy. So you could up it to 12 seasons to do better on both time and accuracy.

Voidious16:10, 13 August 2012
 

priorities

At this point, this tool does everything I need and I'm really happy with it, so if anyone wants to offer feedback as far as features or prioritizing the to-do's, let me know. =) I'll probably bang out some of the more important stuff in the next week or so (like letting you configure JVM arguments), and the option for dynamically loading battle listeners for custom scoring sounds really cool, so I might tinker with that soon too.

Voidious00:57, 30 July 2012

Hi mate. I got a little intimate with your code and finally figured out how it works :). I wrote a dynamic class loader that can load classes from a specific directory/jar. The classes will only be loaded if they provide a certain interface. So far so good. After this I was digging through the code and was looking for a good point to use these classes. Unfortunately it looks like there is no good way to pass classes between the 'BattleProcess' and the 'BattleRunner'. I tried to redirect the 'System.out/in' of the BattleProcess to serialization streams, but this is not working, as I now know. I guess object serialization over temp files is nothing that you are fond of, not to speak of RMI. The other idea that came to me: would it be possible to map the events of the BattleProcess BattleListener (on...()) to strings, then pass them over the in/out stream to the BattleRunner and rebuild the events there? In my opinion this would have the advantage that you can pass the events to the user-made score class and would have no need to do all the score parsing within your code. If the user class decides it has no need for the event, it will simply be ignored.
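
The dynamic-loading piece could look roughly like this (a sketch using URLClassLoader; ScoreListener and the method names are made up for illustration):

import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;

// ScoreListener stands in for whatever interface RoboRunner and the score classes agree on.
interface ScoreListener { String getName(); }

class ScoreClassLoader {
  static ScoreListener load(File jarOrDir, String className) throws Exception {
    URLClassLoader loader = new URLClassLoader(
        new URL[] { jarOrDir.toURI().toURL() },
        ScoreListener.class.getClassLoader());
    Class<?> clazz = Class.forName(className, true, loader);
    if (!ScoreListener.class.isAssignableFrom(clazz)) {
      return null;  // only keep classes that implement the expected interface
    }
    return (ScoreListener) clazz.getDeclaredConstructor().newInstance();
  }
}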

Hmm, I have a hard time explaining this right now :). Let me give you a scenario.

I write a score class for the PatternChallenge. The score class interface has a getName() method and this name has to be in the 'pattern.rrc' too. The class will be loaded: RoboRunner reads the .rrc file, looks for the available score classes and finds my PatternChallenge class. Now you can register this class on the BattleRunner (similar to the BattleResultHandler you have). The score interface has, let's say, onBattleCompleted(..) implemented, and you pass all the events (in this case just one) to my score class. There I can read the damage fields and calculate my score, and if I want to print the results to the console I can do this as well (no work for you so far :)). If the score interface provides a toString() method I could use this to provide an output string for the data file. The only thing you would have to do is get this string and write it to the data file at the end of everything. I'm sure I missed something, but as far as I see it, you could get rid of all the hard-coded scoring you have right now.

Well, I hope what I have said makes at least a little bit of sense. If you think I'm wrong on one or all points, let me know, I'm not offended at all by it.

Anyway enough mumbling for today :)

Take Care

Wompi19:38, 30 July 2012
 

Cool! Well, I have a few thoughts on how all this could tie together:

  • Instead of (or in addition to) RoboRunner/BattleRunner dynamically loading the listener/scoring class, I think we should pass a flag to BattleProcess that tells it the name of the listeners to load and attach to Robocode engine.
  • I think it would be good if the listener interface extends IBattleListener, or includes one, so you can just attach it to the RobocodeEngine (addBattleListener) and have it listen to the events it wants.
  • Then I guess it would need some setup to pass its output back to BattleRunner so we can store it and print it. I'm fine with printing to stdout or writing to temp files or whatever. I guess if we load the interface on the RoboRunner side, too, it could also have a method that runs after each battle to print whatever it wants from the data file.
  • I don't think it's reasonable for BattleProcess to always listen to all events and pass all that data back for every battle. If you look at IBattleListener, it's possible to listen to every detail about every single turn in the battle. That's a lot of extra processing if you're not using it. =)

Does most of that make sense? Thanks for getting the ball rolling on this! I think it'd be a really exciting feature. Even if nobody but us uses it. =)

Voidious19:58, 30 July 2012
 

Hmm ...

  • Is there another way then dynamic loading a class, if the program does not know about it? Maybe including the score class directory in the class path and making the challenge name the fully package name but this would still need the class loader part.
  • I was starting with the interface to be IBattleListener but i could not get the event classes within BattleRunner and therefore i mapped it to the same methods but with different parameter objects.
  • This sounds interesting. I was playing with this but had to face some issues that i could not solve. Loading the same class in different environments but not using all methods equally would be very inconsistent (not to say bad style :)) i guess. The user is probably not aware that the class has no idea where the events are processed and would put his output stuff just within the on..() methods - but never got a result, because it works in a different environment. And making two different classes (one for BattleRunner one for BattleProcess) would be not very user friendly and increases the probability to doing something wrong.

If you have no problem with temp files i guess this would be a good way to solve the issues. This way you can load the score class (should be extend BattleAdaptor) and RoboRunner can check if a certain method is overloaded (translates to - is he interested in this information). This information could be flagged to the BattleProcess and he can use it to process the needed events. If you use temp files you have the possibility to serialize almost every event to the file - pass the temp file name to BattleRunner, restore the Events and pass them to the score class. I cannot point my finger on it, but something tells me that there is something wrong with this approach :)

  • Yep, you are right :) - I was not fully aware of the cascading nature of the onTurn..() events, and this could lead to some issues with the temp files too, I guess. If you are, let's say, just interested in the energy level of all bots, it would certainly not make sense to save the whole turn-event cascade. Maybe you have an idea to overcome this.

Hehe, that's quite a point you've got there :). But I hope it will pay off somehow, especially when I look at the time I have spent writing output classes to get data visualized through GnuPlot. I can easily see some nice GUI statistics diagrams or movement plots for later runs, and that really excites me :).

Take Care

Edit: Another incredibly easy-to-use IPC mechanism would be named pipes. But this would put Windows users out of business until someone is willing to write a JNI adaptor, or finds another way to establish a named pipe there.

Wompi08:53, 31 July 2012
 

So I guess there's two major things being weighed here:

  • User code running in 1 vs 2 places - Having the user code running on just the RoboRunner side of things may avoid some programming pitfalls if someone tries to store state between the battle listener and the score output.
  • Having to flatten the battle events for post-processing - If the user code is not in the BattleProcess, we need to figure out what events to listen to, log them, and pass them back to the other side for post-processing after the battle.

I guess I have a pretty strong preference for having user code in the battle listener itself instead of processing and transferring all the desired events. Figuring out which methods to listen to, serializing all the events, then processing them on the other side just seems like a lot of unnecessary work, and possibly error-prone. Either a big note in the Javadoc that the listener methods should be idempotent, or using separate interfaces, seems like an OK option to me.

I get the impression you'd rather make the other trade-off. =) The main thing I'm not sure of is whether reflection can figure out which methods you actually override. All of them are already implemented by BattleAdaptor, so I'm just not sure we can tell the difference. You could end up with some big temp files if you listen to onTurnEnded, but I don't think processing time would be much compared to running the battle itself.

So I guess what I'm imagining is something like:

  • RoboRunner finds the custom listeners (command line argument and/or in challenge file). It loads an instance to process scoring output and passes the listener names to BattleProcess, which also loads them.
  • BattleProcess sets some object on the listening class, which the listener can use to store custom values. (E.g., "skipped_turns" = 50, or "score_snapshots" = {100, 150, 250, 575}.) Maybe an XML or JSON object.
  • BattleProcess loads the battle listener, attaches it to the Robocode engine, and runs the battle. The listener processes things on the fly and stores data in the data object (a rough sketch follows this list).
  • The values stored by the listener would be output by BattleProcess, read by BattleRunner, and stored in the bot's data file. (With XML or JSON, converting to/from ASCII like this would be pretty easy.)
  • The scoring method would take the score data for that bot set and/or battle and display whatever it wants.
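As a purely hypothetical illustration of the listener-plus-data-object step above: the map-based data object and the class name are assumptions; BattleAdaptor and the event classes are the real control-API ones.

  import java.util.LinkedHashMap;
  import java.util.Map;
  import robocode.control.events.BattleAdaptor;
  import robocode.control.events.BattleCompletedEvent;
  import robocode.control.events.RoundEndedEvent;

  // The listener fills a simple key/value data object during the battle; BattleProcess
  // could then serialize the map (e.g. as JSON) and BattleRunner store it in the data file.
  public class CustomDataListener extends BattleAdaptor {
      private final Map<String, Object> data = new LinkedHashMap<String, Object>();
      private int rounds = 0;

      @Override
      public void onRoundEnded(RoundEndedEvent event) {
          rounds++;
      }

      @Override
      public void onBattleCompleted(BattleCompletedEvent event) {
          data.put("rounds", rounds);
      }

      public Map<String, Object> getData() {
          return data;
      }
  }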
Voidious16:30, 31 July 2012
 

If you're using some sort of IPC, why not TCP? Then it opens the option of running remote battle runners.

Skilgannon19:16, 31 July 2012

This could actually work really smoothly. By default, it spins up the processes as it does now, but passes a port number to each process and communicates over TCP/IP. The data sent / received could remain the same. Then we could add command line arguments for:

  1. Launching Robocode processes and doing nothing, just listening for commands.
  2. Accepting a list of host:port of additional processes. In addition to the normal processes, launch a thread for each remote process.

So on your extra machine, you do #1, and on your primary machine you do #2, and voila!

Edit: Except for copying the necessary bot JARs. That would be a little more complicated.
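Purely as a sketch of that direction, a BattleProcess-side TCP loop could look something like the following. The port handling, the one-command-per-line protocol, and runBattle() are assumptions, not existing RoboRunner code; the point is that the data sent and received could stay the same as with the current setup.

  import java.io.BufferedReader;
  import java.io.IOException;
  import java.io.InputStreamReader;
  import java.io.PrintWriter;
  import java.net.ServerSocket;
  import java.net.Socket;

  // Listens on a port, reads one battle command per line, answers with one result line.
  public class RemoteBattleProcessSketch {
      public static void main(String[] args) throws IOException {
          ServerSocket server = new ServerSocket(Integer.parseInt(args[0]));
          Socket client = server.accept();
          BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
          PrintWriter out = new PrintWriter(client.getOutputStream(), true);
          String command;
          while ((command = in.readLine()) != null) {
              out.println(runBattle(command));
          }
          client.close();
          server.close();
      }

      private static String runBattle(String command) {
          // Placeholder: parse the command and run the battle via the control API.
          return "RESULT ...";
      }
  }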

Voidious19:45, 31 July 2012
 

Well :), of course TCP would be the obvious choice for IPC, but I think it brings a whole new bunch of complexity into the program, and I'm not sure it is worth the struggle.

Besides copying the bot JARs, copying the user score classes, and configuring the Robocode path on every extra machine, there are some other, more technical issues to consider. Of course, if done right it would be a very nice and strong feature, beyond question.

The scenario you described, with JSON, sounds quite interesting; maybe I should reconsider my concerns about having the user classes running in two different places. I'm sure I'm nitpicking too much on that point.

Side note: it is possible with reflection to check whether a method is overridden just by doing

myBattleAdaptor.getClass().getMethod("onBattleCompleted", BattleCompletedEvent.class).getDeclaringClass()

If it returns myBattleAdaptor's own class, the method is overridden.
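Wrapped up as a small helper, that check could look like this; BattleAdaptor and the event classes are the real control-API classes, while the helper itself is just an illustration.

  import java.lang.reflect.Method;
  import robocode.control.events.BattleAdaptor;
  import robocode.control.events.BattleCompletedEvent;
  import robocode.control.events.BattleStartedEvent;

  // A method counts as overridden if its most-specific public declaration
  // is not on BattleAdaptor itself.
  public class OverrideCheck {
      static boolean overrides(BattleAdaptor listener, String name, Class<?>... params)
              throws NoSuchMethodException {
          Method m = listener.getClass().getMethod(name, params);
          return m.getDeclaringClass() != BattleAdaptor.class;
      }

      public static void main(String[] args) throws NoSuchMethodException {
          BattleAdaptor listener = new BattleAdaptor() {
              @Override
              public void onBattleCompleted(BattleCompletedEvent event) { }
          };
          System.out.println(overrides(listener, "onBattleCompleted", BattleCompletedEvent.class)); // true
          System.out.println(overrides(listener, "onBattleStarted", BattleStartedEvent.class));     // false
      }
  }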

Right now I have discarded the temp-file approach, simply because I didn't like it, and switched to named pipes. The BattleRunner has some watcher threads that communicate with the BattleProcesses using ObjectStreams, watch out for errors, and feed the score class. Don't worry, I'm doing all this just out of curiosity and will be fine with whatever you come up with.
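On Unix-like systems the named-pipe variant really is small: a FIFO created beforehand (for example with mkfifo) can be opened like a regular file, so ObjectStreams work over it directly. The path below is just an example, and note that opening a FIFO for writing blocks until a reader opens the other end.

  import java.io.FileOutputStream;
  import java.io.IOException;
  import java.io.ObjectOutputStream;

  // Writer side of the named-pipe idea; the reader side would wrap the same path
  // in an ObjectInputStream. Both ends block until the other one is open.
  public class NamedPipeWriterSketch {
      public static void main(String[] args) throws IOException {
          ObjectOutputStream out =
                  new ObjectOutputStream(new FileOutputStream("/tmp/roborunner-pipe"));
          out.writeObject("battle completed");  // any serializable payload
          out.close();
      }
  }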

Take Care

Wompi09:24, 1 August 2012
 
 

Congratulations on releasing this!

I've got it working already. Super easy.

It is moving along noticeably quicker than RoboResearch! Awesome work!

Tkiesel05:17, 26 July 2012

Cool, so nice to hear! =) I think some people will miss the RoboResearch GUI, but maybe I or someone else can add one sometime. And there's still quite a few little things left on the to do list. But I'm pretty happy with it. =)

Voidious05:28, 26 July 2012
 

Hi Voidious. Nice program you've put together there, respect. I'm really excited about the melee feature. I had a couple of tries with RoboResearch but never got it to work for melee benchmarks. Easy to install and use, nice job.

Have you thought about some sort of dynamic score output? For me it would be very useful if I could write my own benchmark score, because in melee it is sometimes better to get a score view over certain battle states, like start/middle/end game, or score against every opponent on its own. If you have, for example, Diamond :) and some sample bots together, I would like to know how much score I lose to the samples (or weaker bots in general) if a top bot is on the field.

I will have a look at the sources; maybe it is possible to make the scores dynamic. Maybe you have something in mind and we could share some ideas. I'm very fond of the idea of having a nice and easy melee test platform.

The remote client feature of Jdev's Distributed_Robocode would be awesome. Unfortunately it seems to need Java 7 and is therefore out of my reach.

Take Care

Wompi09:11, 28 July 2012
 

Hey Wompi, thanks for the thoughts. The score output could definitely use a lot more features/options; it's pretty bare bones right now. You also make me realize that I don't even store per-bot scores in the data file, so I'll need to fix that first. One thing is, as much as possible, I want to make the right decision automatically about how to show scores instead of making you remember lots of settings, but in cases where different things make sense to different people I'm OK with adding optional flags or whatever.

So for Melee, right now we have something like:

  voidious.Diamond 1.8.4.x12 vs abc.Shadow 3.84i, sample.Crazy 1.0: 61.02, positive.Portia 1.26e took 34.8s, avg: 59.93.
Overall score: 55.34, 1.5 seasons

So maybe if there's more than one opponent, we'd add the per bot scores each on their own line after that? Like:

  voidious.Diamond 1.8.4.x12 vs abc.Shadow 3.84i, sample.Crazy 1.0: 61.02, positive.Portia 1.26e took 34.8s, avg: 59.93.
    vs abc.Shadow 3.84i: 55.05 (22000 : 19000), avg: 53.70
    vs sample.Crazy 1.0: 90.1 (22000 : 2000), avg: 90.2
    vs positive.Portia 1.26e: 53.43 (22000 : 20341), avg: 54.15
Overall score: 55.34, 1.5 seasons

What do you think? Would you also like to see bullet damage / survival data? I always collect all the different fields for scoring, but for the most part was only going to show whatever you had configured as the scoring style. But I've been thinking lately it might be nice to show bullet damage / survival too.

Voidious14:28, 28 July 2012
 

Oh, and about the options for mid-battle score data, that sounds like a really cool idea. Do you mean you could write and plug in your own scoring code? I'm not really sure of the best way to set it up so you could write your scoring class and pass it to RoboRunner, but from a technical standpoint I don't think it would be too tough.

I was just thinking yesterday that it would be cool to integrate some stuff like what Rednaxela did here for collecting hit rates and stuff during a battle, too.

Voidious14:34, 28 July 2012
 

Yes, each one on its own line would be great.

For the damage/survival data, hmm, personally I look at the damage only in rare cases (mostly when I run my 100+k/40k benchmark against the samples), and survival is most interesting if you can see all placements (to spot movement leaks in the early/mid game), but it couldn't hurt to show this data :)

As I said, I have quite a bunch of 'odd' scoring patterns, and a way to implement these dynamically would be great. My first thought was to provide a dynamic ClassLoader and a directory where you can put your own score pattern classes. That way you could release RoboRunner with some default patterns (score, damage) and still provide the possibility to write your own. I guess it would be fairly easy to just provide an interface and pass a 'ScoreObject'. That way you wouldn't have to put much effort into the scoring table. Some challenges also need some unusual scoring, I guess.
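A sketch of that directory-plus-ClassLoader idea using a plain URLClassLoader; the ./scorers directory and the example class name are assumptions.

  import java.io.File;
  import java.net.URL;
  import java.net.URLClassLoader;

  // Loads a user-compiled score class from a well-known directory at runtime.
  public class ScoreClassLoaderSketch {
      public static Object loadScorer(String className) throws Exception {
          File dir = new File("./scorers");  // must exist so the URL ends with '/' and is treated as a directory
          URLClassLoader loader = new URLClassLoader(new URL[] { dir.toURI().toURL() });
          Class<?> clazz = loader.loadClass(className);  // e.g. "wompi.PatternChallengeScore"
          return clazz.newInstance();
      }
  }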

Using 'robocode.control' (like Rednaxela did) would be extraordinary. I constantly write new output classes and pass the results to GnuPlot, but having this bundled - awesome!

If you like, I can try to put a first draft together for the scoring tomorrow. Not sure if you are fond of the idea of having someone messing with your code.

Take Care

Wompi16:27, 28 July 2012
 

Well, for the next round of changes, I think I'll add the per bot data for Melee battles and the other basic scoring options (like survival and bullet damage). There's still a bunch of basic things I need to check off my to-do list before I get too deep into the custom scoring stuff. But I do think it sounds awesome and really powerful.

Reading a custom battle listener at runtime and attaching it to the Robocode engine via control API (which I'm already using to run battles) should be pretty easy. Then you could listen to whatever events you want to and do whatever you want with the data. And if you're comfortable doing everything outside the RoboRunner infrastructure, that will be all you need.

What I see as the hard part is crafting a nice way for you to store/retrieve these score data in the data files and format custom output, which I think is necessary to make this a lot more useful. It's not rocket surgery, but I think the data file format would need an overhaul - maybe switch to something XML based. And I'm not sure about just passing a custom ScoreObject, because, for instance, right now I'm only listening for the final score from the Robocode engine, so I don't even have the data you'd want. You'd need to listen to other stuff for the per-round survival and stuff. There's a lot of options for what type of data to collect, so I don't think I want to guess and try to record all types of data you might want and just pass it along.

And sure, feel free to experiment with some of this stuff, or fork the GitHub repo and go nuts. =) I made the code public domain so people can do whatever they want with it. I'm super stoked that anybody's even interested in this - I thought I'd be the only one using it while everyone else just stuck with their RoboResearch setups. =)

Voidious17:21, 28 July 2012