Talk:RoboJogger
Contents
Thread title | Replies | Last modified |
---|---|---|
Version 0.9.5 Bugs | 1 | 22:16, 29 January 2013 |
Version 0.9.3 Bugs | 0 | 05:07, 31 December 2012 |
How Scoring Works | 1 | 16:59, 25 December 2012 |
Version 0.9.2 Bugs | 9 | 19:29, 24 December 2012 |
Version 0.9.1 Bugs | 6 | 23:27, 17 December 2012 |
Version 0.9 Bugs | 5 | 04:05, 16 December 2012 |
How Should RoboJogger Be Packaged? | 5 | 17:31, 15 December 2012 |
first release | 2 | 17:09, 15 December 2012 |
Results dialog | 4 | 19:37, 14 December 2012 |
Interrupting RoboRunner | 4 | 06:43, 13 December 2012 |
Calculating Confidence | 7 | 04:49, 13 December 2012 |
Problem Running RoboRunner | 6 | 22:16, 5 December 2012 |
Naturally, I found 2 bugs within an hour of releasing 0.9.5. Both will be fixed in the next version. First, I accidentally left in a debug message that gets written to the RoboRunner output window after every battle. That's already gone. Second, I discovered that after running the Remove All function, exceptions start happening when trying to run RoboRunner. I still have to look into this, but it will get fixed in the next version. In the meantime, should this happen to anyone, the solution is just to rerun the Setup function.
Another bug that is still around, just FYI, is that on rare occasions some data can be lost from one of the score logs. I am still not sure under what scenario this happens, but it did happen to me again recently. It didn't totally corrupt the score log, but it did lose some of the battles, and those battles had to be re-run for some of my challenges.
Another potential issue -- not really a bug -- is that RoboJogger can be slow to start up if you have a large number of challenge runs, because it recomputes completion information for every challenge run on startup, which can mean reading a lot of score logs. I was thinking for the next version I would store completion information separately (basically store it with the challenge runs instead of recomputing from the score logs) to make initial start up quicker.
Big bug due to a fat finger mistake in version 0.9.3. PERCENT_SCORE challenges will give you an Unsupported Challenge error when you try to add them. I'll get this fixed soon. It was due to a typo on my part, along with inadequate testing. To get around this bug, you can change your challenge file to be "PRECENT_SCORE" (note the spelling error) and then it will run. Or just wait for me to put out the next version, which could be tonight or several days from now depending on how busy the new baby in our family keeps me. Sorry for the error.
Okay folks. Help me out here. I didn't see any page on the wiki that details how all the challenge scoring types work. I'm basically just guessing on everything but normal scoring and bullet damage scoring. What I'm currently doing is best shown by just posting the class that currently handles scoring, and you all can let me know what needs to be changed. Thanks!
//TODO: Verify how each scoring function is supposed to work
public class ScoreFunctions {

    public static ScoreFunction PERCENT_SCORE = new ScoreFunction() {
        @Override
        public double getScore(RobotScore challenger, RobotScore opponent, int numRounds) {
            return challenger.score / (challenger.score + opponent.score);
        }
    };

    public static ScoreFunction SURVIVAL_FIRSTS = new ScoreFunction() {
        @Override
        public double getScore(RobotScore challenger, RobotScore opponent, int numRounds) {
            return challenger.survivalRounds / (double)numRounds;
        }
    };

    public static ScoreFunction SURVIVAL_SCORE = new ScoreFunction() {
        @Override
        public double getScore(RobotScore challenger, RobotScore opponent, int numRounds) {
            return challenger.survivalScore / (challenger.survivalScore + opponent.survivalScore);
        }
    };

    public static ScoreFunction BULLET_DAMAGE = new ScoreFunction() {
        @Override
        public double getScore(RobotScore challenger, RobotScore opponent, int numRounds) {
            return challenger.bulletDamage / (double)numRounds;
        }
    };

    public static ScoreFunction MOVEMENT_CHALLENGE = new ScoreFunction() {
        @Override
        public double getScore(RobotScore challenger, RobotScore opponent, int numRounds) {
            return challenger.energyConserved / (double)numRounds;
        }
    };
}
MOVEMENT_CHALLENGE is generally "100 - (bullet damage taken / total rounds)", or "return 100 - (opponent.bulletDamage / (double)numRounds)". Though if you can get the Energy Conserved, that might be approximately the same.
BULLET_DAMAGE is AVERAGE_BULLET_DAMAGE.
Otherwise I think it looks correct.
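In code, that MOVEMENT_CHALLENGE suggestion would look something like this (using the same ScoreFunction and RobotScore types as the class you posted -- just a sketch of the definition, not a tested patch):

public static ScoreFunction MOVEMENT_CHALLENGE = new ScoreFunction() {
    @Override
    public double getScore(RobotScore challenger, RobotScore opponent, int numRounds) {
        // 100 minus the average bullet damage taken per round
        return 100 - (opponent.bulletDamage / (double)numRounds);
    }
};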
Also keep in mind that RoboRunner supports melee battles last I checked, so a single RobotScore opponent may not be sufficient unless you add up all the opponents' data into that one entry. Even then, I am not sure if the math works out correctly, especially with my definition of MOVEMENT_CHALLENGE.
I tried to run a tcas/tcrm challenge, but I get "Challenge not supported. Unsupported scoring type: AVERAGE_BULLET_DAMAGE."
Doesn't RoboJogger support this scoring method? Or does it have a different challenge file syntax? If so, what is it?
Unfortunately, I need to go back to RoboResearch.
EDIT: After some research, I found it was "BULLET_DAMAGE". Please make it support "AVERAGE_BULLET_DAMAGE" as well, if only as an alias, so that we can just copy and paste RoboResearch challenge files. :)
It seems to fill in the results with 100, despite the actual scoring in the RoboRunner output being something else. The results in the results window do not update while it is running (not even the erroneous scores). The confidence scores were always 0.0, just as the scoring was always 100.0.
Even after a full three seasons, the scores did not correct themselves. The correct results are in RoboRunner output of course.
Hehe, while I wish I could believe I made a gun which produced such scores, I didn't.
The scores not updating during running is normal. I'm waiting for the next release of RoboRunner before I implement that. As for the messed up scores, I would guess this has something to do with an error in what I'm doing with the scoring type. All of my testing so far has been with PERCENT_SCORE. I'll start testing with other scoring types and fix whatever problems I find.
Okay, so I looked into what is happening with the scores under the BULLET_DAMAGE scoring type. What RoboJogger is doing is taking the challenger's score for the scoring type and dividing it by the sum of the challenger's and opponent's scores for that type. I think this seems right, but I don't use scoring modes other than PERCENT_SCORE very often, so I'm not entirely sure.
The reason this comes up with a bad number is that when the scores are loaded from the ScoreLog (ScoreLog is part of RoboRunner), the bullet damage score (which the BULLET_DAMAGE scoring type relies on) is always 0 for the opponent. This might be a bug in RoboRunner -- either the scores are not getting saved correctly or not getting loaded correctly by the ScoreLog. I also noticed that the energy conserved values were 0 for both the challenger and opponent, so there may be a related bug there too.
TC scoring is bullet_damage / number_of_rounds, which produces an output between 0 and 100 (not 0.0 and 1.0).
The your_score / (your_score + enemy_score) formula is a percent index. It can't be used with bullet damage where one of the robots does not fire (which is what happens in a TC). The reference robot (the one moving) will never have any bullet damage.
I guess I assumed all scores were just percent scores based on different scoring metrics. I need to figure out where I can find information on exactly how each challenge type is scored. I'm not following your explanation on TC scoring, as I don't see how a good robot that doesn't run itself out of energy with misses would ever score anything other than 100. If the opponent doesn't fire (or hit walls), the challenger would score 100 on every round.
Okay, reading a targeting challenge page more closely, given that the challenger is only supposed to fire power 3 bullets, I guess the intent is that on some rounds the challenger will run out of energy such that scores will vary. While I like the idea of not having the opponent firing back as an extra variable, having to alter your challenger's gun to only fire power 3 bullets means this is not a full test of the challenger's gun; it's just a test of the challenger's gun's aim with power 3 bullets, leaving out distance and power controls.
I also have to wonder why the movement and targeting challenges are not just inverses of each other.
Well, that could happen, except you are only allowed to pass 3.0 as the fire power, which means that no current robot is able to get 100 bullet damage all of the time.
Well, an enemy who doesn't fire has no chance at regaining any energy. So you can only at the absolute most do 100 damage to them in a single round.
Of course, if the enemy damages itself by hitting the wall or your robot, then your score will not be 100, since you did not do 100 damage, even if you kill it. This also happens if you don't win the round. If your robot disables itself (and so can no longer fire), then only the damage you did that round gets counted.
As for damage: well, you do more damage than the power you put into a bullet. The algorithm is this, taken from Rules.java:
double damage = 4 * bulletPower;
if (bulletPower > 1) {
damage += 2 * (bulletPower - 1);
}
So you can get 100% without hitting every shot. You do 16 damage for every power-3 bullet that hits. A power-1 bullet (say, after you have fired and missed 33 times) does 4. So you only have to hit 7 times with power-3 bullets to kill the enemy, though it is often not that simple.
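To make the arithmetic concrete, here it is as a quick snippet (just the Rules.java formula above applied to a couple of bullet powers):

public class DamageExample {

    // Same formula as the Rules.java excerpt above.
    static double bulletDamage(double bulletPower) {
        double damage = 4 * bulletPower;
        if (bulletPower > 1) {
            damage += 2 * (bulletPower - 1);
        }
        return damage;
    }

    public static void main(String[] args) {
        System.out.println(bulletDamage(3.0)); // 16.0 damage per power-3 hit
        System.out.println(bulletDamage(1.0)); // 4.0 damage per power-1 hit
        System.out.println(Math.ceil(100 / bulletDamage(3.0))); // 7 power-3 hits to reach 100
    }
}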
Post any bugs you find in 0.9.1 here. Note: If you have 0.9 and want to keep 0.9 data and challenges, just make sure you save and move robojogger.dat and all the files in the data directory. You will probably want to move all the robots from the bots directory too. Note that on first install, there is no bots nor data directory; you can add them manually or just start and stop RoboJogger once to let RoboJogger create them.
I had one strange thing happen so far. Once, when I was starting RoboRunner, the CPU hung at 50% and the battles never started. I stopped then restarted, and it ran fine after that. So be on the lookout for that. I'll be exploring to try to find out what might have caused it.
Had issues with locking up when RoboRunner is started again. I'll be running a lot more small challenges in development to track down the problem and fix it. Not sure what is going on, but it only seems to happen when RoboRunner starts a new challenge.
I had to update my Java 1.6_11 to the newest 1.6_37 version to get RoboJogger running. I just mention it in case someone else has trouble starting the jar.
I haven't done anything so far, but I plan to give it a look over the next few days.
Hm... that's on Mac, yes? I bet the Apple Java Extensions changed somewhere between 1.6_11 and 1.6_37. I'll at least see if I can figure out exactly what version of Java this changed with on the Mac so I can put it in the notes.
In other news, I found and fixed another bug, which will be fixed in the next release. RoboJogger can fail to load a Robot properly if the robot jar file contains multiple .properties files. Already fixed in my source.
FYI -- I'm adding a new little feature to the next release. For each challenge run, you will be able to add a note/description of what the challenge run is for. This is something I will definitely use and I hope some of you might find it useful as well. It will be an extra column on the main window, but I will probably add a way to show/hide various columns.
I did some more testing since I threw up links to version 0.9. I noticed that if you let RoboRunner process all challenges, once it is complete, the buttons and menu item to start RoboRunner and do things like Setup do not re-enable. This was an issue with the way I was doing locking, and I've already fixed it in my source. I'll put up a version 0.9.1 in a few days where that will be fixed.
I also noticed that if you have a challenge where the robots have nicknames (like MC2K7), error messages show up in the log for the nicknames. This doesn't affect usability any, but it shouldn't happen, so I'll get that fixed too.
Another minor issue I noticed is that after multiple seasons, the totals RoboRunner shows and the totals RoboJogger show can be a smidge off. I would guess this is due to some rounding error somewhere. I'll investigate it.
For the look and feel for Windows, I chose Nimbus, because I think it's pretty cool. But I think Chase is right, I should default to the system look and feel. I will probably change that; however, I will probably also add a dialog for changing the look and feel as a preference (I already have a class available that does that, so it's trivial to add).
Totals not showing up in the middle of the first season is normal. A total isn't really valid (imo) until at least one battle has been run against each robot in the group or challenge. However, if totals don't show up after a full season completes, that is a problem. And not a problem I have witnessed. I'll keep an eye on this, but so far I haven't seen this problem. If you continue to see this problem and I don't, you may need to post or send a copy of your challenge file for me to test with.
Chase had another good point about removing unnecessary stuff from the robocode_template directory. It is highly unlikely someone will download this and need or want a full copy of Robocode with it. I'll trim it down to the bare minimum.
Finally, I know it's a little unnerving not having some kind of progress indication when RoboRunner is running. Note that in the Tools menu there is an option to Show RoboRunner Output. This will show what RoboRunner would normally output to the console (but with each line timestamped), though it is limited to I think the last 300 printed lines. It is the only way to see on-the-fly results right now and have a good indication of progress.
Please post any other bugs you come across. Thanks!
My original comment was eaten by the reply box closing when I scrolled to hit save.
So, in short, I did notice the output, and did use it; that is how I know that there were no results, mid-season or after it finished. This was a one-robot challenge with percent scoring.
As for progress, you could just parse the output of RoboRunner and redisplay it in the UI in some way, though this may require another thread.
I thought about parsing the RoboRunner output, but that is not a very clean way of interfacing with RoboRunner, and Voidious already seemed willing to provide an update in the future that will provide a more robust interface for getting on-the-fly results. So I am waiting on that (@Voidious -- let me know if you want me to help on this; I have time now that I'm mostly done with RoboJogger 0.9). In the meantime, if you close all result windows for a challenge and reopen the results, you should see full results. RoboJogger reloads everything using the RoboRunner ScoreLog when the results window is opened. I'll do some more testing to see if I can cause a scenario where results are missing.
Found another problem. After "stopping" RoboRunner, the RoboRunner threads appear to continue to use CPU. Not sure why yet. But it definitely needs to be looked into.
Solved the last bug already. I didn't realize that when shutdown() is called on the thread pool, it will still finish executing any queued tasks. I just had to make a minor update to my modified version of RoboRunner to also cancel all remaining Futures after being interrupted.
A question for anyone who cares to chime in. Tonight I created most of a build script for RoboJogger. In the past I have used tools like Launch4J and IzPack to make executables and installers for Java applications for Windows. I could do this for RoboJogger, if anyone prefers. In addition to just making the source available, would you prefer:
1) A zipped archive where the main class is in a jar (unzip and run with javaw -jar robojogger.jar),
2) A zipped archive where the main class is in an exe (unzip and run robojogger.exe),
3) An installer that is a jar (run the installer with javaw -jar robojogger-installer.jar), or
4) An installer that is an exe (just run robojogger-installer.exe)?
For 3) and 4), you could also indicate whether you think the main class should be a jar or exe, if that matters to you. Or I could provide it several different ways. So if you care one way or another, let me know.
Hi mate. I'm on a Mac here and I would prefer a jar in all cases. It also has to be at most Java 1.6 to be usable for me.
I can set up a Mac build as well. I've done that before. It will even be somewhat Macish, if you will, as I try to follow the Mac application styling guidelines by using the Mac menu bar and doing things like reversing the ok/cancel buttons on dialogs (I have support for that kind of stuff built into my code). For a Mac version, I can either have a zip filled with jars, or I can also create a .dmg file if preferred.
Do whatever is easiest for you. I'm fine with .jar or .dmg. If I'm going to use it, I will probably make an .app out of it anyway. Not sure what you mean by 'somewhat Macish' :) - do you mean you have programmed it this way or are you just using the -Xdock flags?
If you are asking for specifics, being somewhat Mac-ish to me means setting the system property "apple.laf.useScreenMenuBar" to "true" to use the screen menu bar instead of a menu bar in the Java app, setting the system property "com.apple.mrj.application.apple.menu.about.name" to set the application name, using the "com.apple.eawt" classes for setting up the Exit, About, and Preferences menu items, and, for ok/cancel style dialogs, making the OK button appear to the right of the Cancel button rather than the other way around.
If you are feeling particularly ambitious, you could use launch4j to make a windows exe to launch it (or wrap it). It doesn't change anything for me (I can run it by double clicking the jar). But others might find it useful.
I think I recall a recent version of launch4j also supporting making MacOSX executables too.
Congrats on the first release! :-) I'll be sure to test it out soon on my systems and let you know how it goes.
I think it makes sense for you to just include RoboRunner in your downloads like this. It makes the setup so much easier, and savvy users could still drop in the latest RoboRunner JAR if they want (after next version). I'll try to incorporate your interrupt changes and the new results listener soon, sorry to drag my feet on that.
No problem. Instead of trying to handle InterruptedExceptions, you might just consider adding a volatile cancel flag that gets checked before you run each battle. That way, another thread can set the cancel flag to true when it wants RoboRunner to stop, and RoboRunner can shut down more cleanly the next time it checks the cancel flag.
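Something like this, roughly (the names here are made up for illustration, not your actual classes):

import java.util.List;

public class CancellableRunner {

    private volatile boolean cancelled = false;

    // Another thread (e.g. the UI) calls this when it wants RoboRunner to stop.
    public void cancel() {
        cancelled = true;
    }

    // Check the flag before each battle instead of relying on InterruptedException.
    public void runAll(List<Runnable> battles) {
        for (Runnable battle : battles) {
            if (cancelled) {
                break; // skip everything still queued and shut down cleanly
            }
            battle.run();
        }
    }
}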
Also, to avoid possible contention over a score log, you might add some way to lock a score log (maybe add a ReentrantLock to control it that both RoboRunner and external threads can access). That way RoboJogger (or anything anyone else might write) can lock a score log when it reads it, unlock it when it's done, without worrying about stepping on RoboRunner trying to write to the score log at the same time.
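And the locking could be as simple as this (again just a sketch with made-up names):

import java.util.concurrent.locks.ReentrantLock;

public class ScoreLogAccess {

    // Shared between RoboRunner (the writer) and RoboJogger (the reader).
    private final ReentrantLock scoreLogLock = new ReentrantLock();

    public void writeScores() {
        scoreLogLock.lock();
        try {
            // ... RoboRunner appends battle results to the score log here ...
        } finally {
            scoreLogLock.unlock();
        }
    }

    public void readScores() {
        scoreLogLock.lock();
        try {
            // ... RoboJogger reads the score log for the results window here ...
        } finally {
            scoreLogLock.unlock();
        }
    }
}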
Just some thoughts I had....
Okay, first of all, good work! Unlike RoboRunner by itself, RoboJogger actually seems to work. On the other hand, the results don't seem to work, mid-season or at the end of a run. They just never show up as completed. Not sure what the problem is here.
It doesn't feel considerably faster than RoboResearch, but I haven't tested them head to head or anything. It might be because RoboResearch shows the progress in the UI itself (I understand how this might not be possible in RoboJogger, at least on a per-turn basis).
Other Notes:
I notice it uses a different look and feel. Usually people expect programs to use their system look and feel. You can achieve this in Java by using UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName()); This just goes toward making the program feel more 'comfortable' to people who use it.
If you are releasing Robocode with it, you could 'cut down' the Robocode version included to reduce the size of the zip. It doesn't need a compiler, javadoc, rumble, templates, sample bots, etc. You also seem to have multiple copies of Robocode in there as well -- one in template, one in robocode_jars.
On the other hand, you may want to include a few default challenge files, like the ones RoboResearch has. This will help if someone doesn't have RoboResearch already and/or doesn't know how to create a challenge file.
One small comment about the results dialog. I may be abnormal, but I frequently have huge test beds, like 250 bots, or 60 different sets of 9-bot melee battles. Obviously there's no nice way to display 500 columns of scores, but just making sure not to do something ridiculous (like a 5000px wide window, which I think RoboResearch's UI does) would be nice. :-)
It will be in a scroll pane in a window that has a max size limit on it. Beyond that, do you think there is a better way to show results when there is a huge number of bots?
Not really, that seems good. At that point you're probably just interested in overall score. But on that note, if "Total" was in some fixed place instead of requiring me to scroll way to the right, that would be nice. :-)
Noted. Check out the updated screenshot I posted. I will probably have the preferred size set to something like 800 pixels wide for the center scroll pane. In the screenshot it is set to a somewhat small 400 pixels wide just to make testing easier.
Something else I'm working on is providing a way to interrupt RoboRunner in the middle of a challenge. I'm not sure in what ways that could potentially mess up RoboRunner yet, but I did have to make a couple of changes to make this work:
First, in order to stop RoboRunner completely (and not just the current battle), I had to make an InterruptedException result in the bypass of all queued battles:
In BattleRunner:
private void getAllFutures(List<Future<String>> futures) {
    for (Future<String> future : futures) {
        try {
            future.get();
        } catch (InterruptedException e) {
            e.printStackTrace();
            return;
        } catch (ExecutionException e) {
            e.printStackTrace();
        }
    }
}
There might be some additional modification, but for now, I just added a return statement if an InterruptedException occurs (will probably also get rid of the printStackTrace call). This prevents calling get() on all remaining Futures. While I think this was an unexpected condition in RoboRunner, in RoboJogger an InterruptedException is now an expected result whenever a stop command is issued for RoboRunner. A remaining question is, what, if anything, will be broken as a result of this?
Another change I made was to ScoreLog, so that trying to access battle results for a "botList" that does not exist will not cause a NullPointerException:
In ScoreLog:
public List<BattleScore> getBattleScores(String botList) {
    List<BattleScore> scores = _scores.get(botList);
    return (scores == null) ? null : ImmutableList.copyOf(_scores.get(botList));
}
I provided the null check on scores. I was kind of surprised that ImmutableList didn't do that by design. The most likely scenario where this happens is related to my other change -- if a challenge is interrupted before battles have been run against all opponents, when I later access results from the ScoreLog, I am not aware of missing results until the getBattleScores method returns null. I suppose I could have also just added a try/catch in my own code for NullPointerException without having to change RoboRunner, but I felt doing so was not the better way of handling it.
These changes are not finalized. I'm just writing about them for the sake of discussion.
I just noticed that for getting battle results, there is a hasBotList(String) method that I could call before trying to get battle scores. This would prevent the NPE without modifying RoboRunner. Given this, I could see arguing either way about whether getBattleScores should throw NPE or return null for a botList that does not exist.
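For my part, the call pattern on the RoboJogger side would just be something like:

if (scoreLog.hasBotList(botList)) {
    List<BattleScore> scores = scoreLog.getBattleScores(botList);
    // process the scores for the results window
} else {
    // no battles were run against this opponent (e.g. the challenge was interrupted early)
}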
I don't have a strong opinion about NPE vs returning null - I think the hasBotList is what made me feel ok with leaving the other one NPE-ing, but that doesn't mean it has to. Seems silly to insist you call 2 methods instead of 1.
I'll have to think about the interruption stuff. Are you using the same RoboRunner instance after interrupting and trying to use it again? Certainly that would give me pause and I'd want to look over RoboRunner and BattleRunner to see what internal state might be confused by this. If not, my only worry would be if the interruption came during a file write to the score log. Maybe in the file writing, we need to catch InterruptedException, close the file stream in the catch, and rethrow? I'm not really sure. Maybe Java is already smart enough not to corrupt a file stream when being interrupted? The code you have here makes sense and doesn't raise any red flags besides that.
No -- I create a new RoboRunner instance for each challenge started. If RoboRunner is interrupted in the middle of a challenge, when RoboRunner is restarted, a new RoboRunner instance is created. Good point on the potential for RoboRunner to be writing to the score log when interrupted; I do need to take a closer look at that.
Instead of dealing with interrupted exceptions, RoboRunner could just provide a cancel flag that gets checked before each get().
@Voidious -- I'm not sure what your plan for confidence is, but I eagerly went ahead and developed my own confidence calculator. I was looking over your code for calculating confidence and was having trouble following it, so I instead went to my wife's Principles of Biostatistics book and read the chapter on Confidence Intervals. For the sake of simplicity, I will stick with 95% confidence intervals, as that is what you used in your code (that's where the 1.96 comes from) and it seems reasonable. The confidence interval for a single robot turns out to be pretty simple to calculate (in special-character-challenged terms, it is x +- 1.96 * s / sqrt(n), where x is the mean, s is the standard deviation, and n is the sample size). Where it gets more complicated is in calculating the confidence interval for groups and the overall total score.
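In code form, the single-robot interval is just a little static helper (a sketch; it assumes the individual battle scores are already collected into an array):

// 95% confidence interval for one robot's scores: x +- 1.96 * s / sqrt(n)
static double[] confidenceInterval95(double[] scores) {
    int n = scores.length;
    double mean = 0;
    for (double score : scores) {
        mean += score;
    }
    mean /= n;
    double variance = 0;
    for (double score : scores) {
        variance += (score - mean) * (score - mean);
    }
    variance /= (n - 1); // sample variance
    double halfWidth = 1.96 * Math.sqrt(variance) / Math.sqrt(n);
    return new double[] { mean - halfWidth, mean + halfWidth };
}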
Let's talk groups first. What I did for a group was to take the first score for each opponent, average them all, and that becomes data point 1. Then take the second score for each opponent, average them, and that becomes data point 2. I determine how many data points to use by calculating the average number of battles for an opponent in the group, rounded. This means some data points for opponents with more scores end up getting thrown away, and some data points for opponents with fewer scores don't have enough scores. For the latter, I use as many extra randomly generated scores as I need, where each random score falls within the confidence interval of scores for that particular robot. Once I have all of the data points, I then use the original means for calculating a confidence interval on the collected data points.
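In rough code, building the group data points looks something like this (a simplified sketch of what I described above; the random fill-in for opponents with too few scores is left out, so here those opponents just drop out of the later data points):

static double[] groupDataPoints(List<double[]> scoresPerOpponent, int numDataPoints) {
    double[] dataPoints = new double[numDataPoints];
    for (int i = 0; i < numDataPoints; i++) {
        double sum = 0;
        int count = 0;
        for (double[] opponentScores : scoresPerOpponent) {
            if (i < opponentScores.length) { // this opponent has an i-th score
                sum += opponentScores[i];
                count++;
            }
        }
        dataPoints[i] = sum / count; // data point i = average of the i-th scores
    }
    return dataPoints;
}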
Now for the overall total. If there is only 1 group (or no groups, depending on how you look at it), then there is nothing more to do -- use the values calculated for the 1 group. But if there are multiple groups, then what? We should probably respect that the overall total is an average of the group totals. This would end up being just like calculating the group confidence intervals, only treating the groups like the robots.
Did that make sense? How is this different from what you have done in RoboRunner?
Heh, well, what I did is a little complicated, but I think it's about the best you can do for a set of bots that each have their own distributions. Basically I run 1,000 or whatever random simulations of the overall score, based on the averages / standard deviations of each individual bot's score distribution. Then I can take those "overall score" samples, supposedly generated from the same distribution as the real scores, and use them as additional samples to calculate the confidence interval of the overall score. It's a fairly basic Monte Carlo method.
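In rough code, the idea is something like this (just a sketch of the approach, not the actual RoboRunner code):

import java.util.Random;

// Sample each bot's score from a normal distribution with that bot's mean and
// standard deviation, average the samples into one simulated overall score, and
// repeat. The resulting samples feed the usual confidence interval calculation.
static double[] simulateOverallScores(double[] means, double[] stdDevs, int numSims) {
    Random random = new Random();
    double[] overallScores = new double[numSims];
    for (int i = 0; i < numSims; i++) {
        double sum = 0;
        for (int bot = 0; bot < means.length; bot++) {
            sum += means[bot] + stdDevs[bot] * random.nextGaussian();
        }
        overallScores[i] = sum / means.length;
    }
    return overallScores;
}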
I see there was a discussion about it on the RoboRunner page. I should probably go read that. Never heard of the Monte Carlo method, so I'll look into it.
I'd heard the term, but it was totally Skilgannon that knew enough to suggest it. Once I looked into it, though, it was pretty simple.
But I also wanted to mention, I was planning to pass some object with all the confidence interval info you might need about the current battle in the new listener. I figured that was among the things you'd want in the application output, since it's among the things I print in the console version. But of course you're free to use whatever you like. :-)
I'll use it if it's there. I use the ScoreLog to show data from past battles, and wasn't sure if confidence information would also be available from the ScoreLog after your updates. If not, I can keep using my own confidence calculator for past data.
Hmm. Well first off, I am pretty sure you should make sure you are using the [t-distribution], not the normal distribution. Using that, I would generate a confidence interval for each individual bot. I am nearly certain that there is a way to generate a confidence interval from the mean of several other intervals. I can't remember off the top of my head but I vaguely recall it being something like the square root of the sum of the squares of the standard errors (not standard deviations since the sample size is presumably fairly small). I'll tell you if I can find it.
http://www.hilemansblog.com/?tag=root-sum-of-squares and https://www.westgard.com/lesson35.htm#6
I didn't read through them carefully (kind of busy with school), but skimming through them quickly, it appears that the square root of the sum of the variances of the individual distributions is correct.
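If that is right, the combination is only a few lines (a sketch based on my reading of those links, so double-check the math):

// Root sum of squares of the individual standard errors (each s / sqrt(n)).
// For the mean of k groups, divide the result by k.
static double combinedStandardError(double[] standardErrors) {
    double sumOfSquares = 0;
    for (double se : standardErrors) {
        sumOfSquares += se * se;
    }
    return Math.sqrt(sumOfSquares);
}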
I'm finally at the point where I am trying to actually launch RoboRunner. Currently running into an error I will have to debug. Posting part of the stack trace here in case anyone wants to comment.
Copying missing bots... 0 JAR copies done!
Initializing engine: robocodes\z1...
Exception in thread "AWT-EventQueue-0" java.lang.NullPointerException
    at robowiki.runner.BattleRunner.initEngine(BattleRunner.java:66)
    at robowiki.runner.BattleRunner.<init>(BattleRunner.java:42)
    at robowiki.runner.RoboRunner.<init>(RoboRunner.java:172)
    at org.xandercat.roborunner.runner.RoboRunnerService.startRunner(RoboRunnerService.java:44)
    at org.xandercat.roborunner.runner.action.LaunchRoboRunnerAction.actionPerformed(LaunchRoboRunnerAction.java:46)
    at javax.swing.AbstractButton.fireActionPerformed(Unknown Source)
And the chunk of relevant code from RoboRunner:
System.out.print("Initializing engine: " + enginePath + "... "); ProcessBuilder builder = new ProcessBuilder(command); builder.redirectErrorStream(true); Process battleProcess = builder.start(); BufferedReader reader = new BufferedReader( new InputStreamReader(battleProcess.getInputStream())); String processOutput; do { processOutput = reader.readLine(); } while (!processOutput.equals(BattleProcess.READY_SIGNAL)); System.out.println("done!"); _processQueue.add(battleProcess);
Presumably, the input stream never provided the BattleProcess.READY_SIGNAL. I'll have to do some digging to figure out why. I'm not entirely clear on what the RoboRunner requirements are, but at the moment I am running it under Java 6 with Robocode 1.7.3.0.
I'll take a deeper look later when I'm home. At a glance, it seems like processOutput is coming up null - maybe the condition should be "processOutput != null && ...". What command are you using to launch this?
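i.e. something like this (just guarding against the stream ending before the ready signal shows up):

do {
    processOutput = reader.readLine();
} while (processOutput != null && !processOutput.equals(BattleProcess.READY_SIGNAL));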
Looks like the problem was I didn't have one of the needed Robocode jars in the classpath. Thanks for including source in the RoboRunner jar; that made debugging easier.
Fixing the classpath fixed the problem I was having. Also, I am running RoboRunner via new RoboRunner(...) and then calling the runBattles() method. I need to dig a little deeper to determine how best to extract the battle results; at the moment it is just letting RoboRunner barf them on System.out. :-)
One thing is for sure -- it runs oodles faster than RoboResearch. I'm definitely switching. I suppose it may have been possible to branch RoboResearch to run battles a la RoboRunner, but I'm having fun building a new UI, so I'm continuing on.
Cool, good to hear! I don't think I kept any real test results of speed vs RoboResearch, but I think it was in the range of 20% less time for my bot / system. The smart battles stuff helps too, but it's hard to measure.
Similarly, I had long wanted to update RoboResearch to use the control API instead of launching external Java processes. When I started digging into it, it just looked easier / better / more fun to rewrite from scratch.