View source for User talk:Wolfman
Thread title | Replies | Last modified
Weird Robocode Bug | 2 | 22:13, 24 November 2017
Packaging A Robot To A Jar from the Command Line? | 22 | 12:56, 15 December 2013
RoboRumble | 4 | 15:14, 27 November 2013
Entry into RoboRumble | 0 | 22:52, 26 November 2013
Bot Size | 3 | 19:12, 26 November 2013
Fixing bugs ... reduces score? | 5 | 13:48, 22 March 2013
Welcome back! | 1 | 20:03, 16 March 2013
I've been getting a weird bug in Robocode where it looks as though my bot does not run during a round once in a while. I'm getting no exceptions, no output.
In fact, Robocode itself is not even printing out the usual stuff. Any ideas? I've never caught this happening while watching battles run; I've only noticed it when it's running fast, minimized.
Here are the outputs from the bots. Notice that if a bot dies, the system will output to that bot's log. But my bot is not getting output for that round, and none of my event handlers are getting run (so no profiling output from my bot for the end of the round).
Round 13 in these logs is the offender
========================= Round 12 of 35 =========================
SYSTEM: Bonus for killing wiki.mc2k7.HawkOnFireOS MC2K7: 6
SYSTEM: rdt.AgentSmith.AgentSmithRedux* wins the round.
PROFILING:
Main Loop Avg 8.542780876159668 Min 6.333148956298828 Max 17.61117172241211 Total 41321.4296875
DangerPrediction:Update Avg 8.52608585357666 Min 6.329133033752441 Max 17.381799697875977 Total 41240.68359375
Evaulate Avg 0.004348999820649624 Min 0.0017849999712780118 Max 1.6774460077285767 Total 25665.873046875
Evaulate Targets Avg 6.150000263005495E-4 Min 0.0 Max 0.15083099901676178 Total 3632.769775390625
EvaluateBulletDistances Avg 0.0012880000285804272 Min 0.0 Max 0.19277900457382202 Total 7604.6123046875
GeneratePredictedPositionsAndAssociatedData Avg 0.0017930000321939588 Min 8.919999818317592E-4 Max 1.6698600053787231 Total 10582.779296875
========================= Round 13 of 35 =========================
========================= Round 14 of 35 =========================
SYSTEM: Bonus for killing wiki.mc2k7.HawkOnFireOS MC2K7: 7
SYSTEM: rdt.AgentSmith.AgentSmithRedux* wins the round.
PROFILING:
Main Loop Avg 8.541385650634766 Min 6.333148956298828 Max 17.61117172241211 Total 47353.4453125
DangerPrediction:Update Avg 8.525357246398926 Min 6.329133033752441 Max 17.381799697875977 Total 47264.58203125
Evaulate Avg 0.004350999835878611 Min 0.0017849999712780118 Max 1.6774460077285767 Total 29429.59375
Evaulate Targets Avg 6.270000012591481E-4 Min 0.0 Max 0.15083099901676178 Total 4240.9716796875
EvaluateBulletDistances Avg 0.0012890000361949205 Min 0.0 Max 0.19277900457382202 Total 8719.4609375
GeneratePredictedPositionsAndAssociatedData Avg 0.001782999956049025 Min 8.919999818317592E-4 Max 1.6698600053787231 Total 12063.5009765625
Notice no system output for round 13.
========================= Round 12 of 35 =========================
SYSTEM: wiki.mc2k7.HawkOnFireOS MC2K7 has died
========================= Round 13 of 35 =========================
SYSTEM: Bonus for killing rdt.AgentSmith.AgentSmithRedux*: 11
SYSTEM: wiki.mc2k7.HawkOnFireOS MC2K7 wins the round.
========================= Round 14 of 35 =========================
SYSTEM: wiki.mc2k7.HawkOnFireOS MC2K7 has died
I can't understand the cause of this. Is it my bot? Is there some Robocode bug?
Which version of Robocode are you using? It seems that it's a known bug of Robocode 22.214.171.124, which is fixed in 126.96.36.199
Is it possible to package my robot from the command line? I see the RoboRumble participants page specifies that the bot needs to be in a jar with a botname.properties file, and the usual way to do this is via the package menu in the client. Is it possible to do this from the command line?
Is it as simple as running the standard "jar" command, passing in the source files (and the botname.properties file)?
It's definitely possible, but I haven't done it, besides manually unzipping / editing / rezipping sometimes. I think Beaming does it with normal development though, so maybe he can comment on his setup.
It is just a normal JAR with your class files + the .properties file. So whatever means you have of creating a JAR should work.
I use standard make file which you are free to modify, see the EvBot source link to github.
Essentially, it does the following. First, it compiles all relevant .java files to .class files and moves them to a special folder 'out' for ease of the subsequent jarring:
javac -d $(OUTDIR) -classpath $(ROBOCODEJAR) YOUR_JAVA_FILE.java
Next you need to modify your YOUR_BOT.properties file: change the version variable accordingly, and generate a new UUID and put it in the file. I am not sure that Robocode actually uses it, but this is fairly easy; I do it with a simple sed script. Put this new file at $(OUTDIR)/$(SUPERPACKADE)/YOUR_BOT.properties. Here $(SUPERPACKADE) is your rumble author designation. In my case, $(SUPERPACKADE) is 'eem' and YOUR_BOT is EvBot.
Next you need to jar all the compiled files and the .properties file. Since we moved everything into $(OUTDIR) in the previous steps, all you need is to run:
cd $(OUTDIR); jar cvfM ../NAME_OF_JARED_BOT.jar `find $(SUPERPACKADE) -type f`
This is it. You are ready to upload the bot for rumble or to test it locally.
I would suggest using my Makefile which comes with EvBot; it automates all of the above, and does separate packaging of test and release builds according to the git tags.
Let me know if you still have questions.
Awesome, two different things to try. Cheers Chase, Beaming. I'll probably try Beaming's method first because it doesn't rely on installing Ant.
FYI, I'm not sure I'll have time to do it (I'm expecting my first child in the next week! :o), but I'm planning on making a web interface for my Raspberry Pi where I can upload my source files, and it auto-compiles the jar and runs RoboRunner, posting results back to the website. A bit like RoboRumble but for personal use, so I can keep track of results of improvements of different versions for TCRM etc. And do genetic-algorithm tuning of the bot, running it on my Pi so I can leave it going for a week. :)
If you use Eclipse, it comes with Ant. Ant is similar to Make in many ways, except it can interact with Java stuff on a deeper level than Make can.
Happy expectations! Do not forget to report motion and targeting algorithms, and code size of your baby :)
You will also have no sleep in the next 3 months. Nevertheless, parenthood is fun and a recommended activity to keep us all in sync with reality :)
I personally attempted Ant a few times, but I do not understand its logic, and the config is too wordy to be human-generated. So the old but proven 'make' is my favorite. I never mastered those fancy IDEs either. The best I saw was Borland C v2.something, and then I never looked back at them.
I think the Raspberry Pi is quite low on CPU power; on the other hand, you will not be able to check it too often, so it might work.
Ant aims to do the same thing Make does, but with XML syntax instead, which is more portable and more friendly to manual editing. XML is a lot more forgiving about indentation and blank spaces.
Despite my (slight) efforts, I still do not understand Maven. I really should read up on it some time.
I have to say, I'm not too much of a fan of Ant personally. Comparing Makefiles and Ant XML, I'd agree the XML is more portable, but I'd consider the Makefile to be similar in terms of friendliness for manual editing. On one hand, Make is annoyingly picky about certain whitespace, but on the other hand I find some of the boilerplate string repetition in XML makes things less human-readable, especially when not using a syntax-highlighting text editor.
(Then again, part of my not being a fan of Ant may be on account of having seen Ant badly abused as a general-ish purpose scripting language. In other words... being forced to write certain kinds of scripting flow control in Ant XML can probably make someone start to dislike Ant...)
Sounds cool, I'd be curious to see what you come up with! But yes, Raspberry Pi is very slow if you're planning on actually running battles there. BerryBots single-threaded runs 20x-30x faster on my 2009 MacBook Pro than on stock Raspberry Pi. Also, last I checked, Java is a huge headache on the Raspberry Pi, but maybe that's improved by now.
Good luck next week! :-)
I looked at getting it running on my Pi a few months ago, but the latest Raspbian distro has Java pre-installed. I've not tried it yet, but I imagine it will be fine. And yes, I know the Pi is really low-powered; it will make up for it in the amount of time I can just leave it slowly churning away while I am out of the house. I don't like leaving my 500W desktop running while I'm not around. The Pi uses about 5 watts of power, but I don't think it's 100 times less powerful ...
Anyway, if you get one running, you can buy another 10 for less than a single desktop and have your own PI rumble server farm! ;)
But for the cost of 20, I got a quad-core Core i7 with 16 gigs of RAM. :-) So I can run 5-6 threads, each 20x faster than the Pi.
If you're curious, there was some talk on Raspberry Pi about some of the issues people hit, though who knows how much has changed in a year+.
How do the battles work in RoboRumble? Do bots play X consecutive rounds against each other, or is each pairing just one round?
I just want to know if I need to save data at the end of a battle and load it for the next battle, or whether there is enough time to learn from scratch each pairing. If it's only one round per battle, I can't imagine there is much time to populate GF arrays etc.
If I need to save data, do people find problems with the number of bots meaning the data saved has to be very small?
Lastly how do people tend to save data from KNN trees? Or do they not bother?
There are 35 rounds in a standard roborumble battle/pairing. Battles are prioritized to fill out missing pairings first, but additional battles are still run afterwards.
From round to round, the proper way to keep data is via use of a static variable in your robot. This is done in almost all robots.
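The static-variable technique above can be sketched as follows. This is an illustration, not code from any specific bot: Robocode creates a fresh robot instance each round but keeps the class (and its statics) loaded for the whole battle, so static fields accumulate across rounds while instance fields reset.

```java
// Hypothetical example of per-battle state via statics. In a real robot
// these fields would live in your AdvancedRobot subclass.
public class RoundStats {
    static int roundsWon = 0;   // survives between rounds in one battle
    int ticksThisRound = 0;     // starts at 0 again every round (new instance)

    void onRoundEnd(boolean won) {
        if (won) roundsWon++;   // accumulates across rounds
    }
}
```

Note that statics are still wiped between battles, which is exactly why battle-to-battle persistence needs the file-saving approach discussed below.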
Not that many robots in the rumble save data from battle to battle, and the few that do, to my knowledge, are usually saving VCS arrays (the amount of data per opponent does have to be small). Saving data between battles is somewhat problematic, because that memory is completely separate between different rumble clients. This means that how the bot ranks will depend on the number of battles fought per opponent per rumble client, which makes its ranking less stable and can make it more difficult to evaluate whether a change you made was actually an improvement. For that reason, data saving between battles is not worth it IMO. (I'd even go so far as to say that personally I think data saving shouldn't be allowed in the rumble because of how it makes scores less stable, but that's just my opinion and it is technically allowed.)
Ok cool, I wasn't sure if a pairing was one round. 35 makes more sense, as that's plenty of time to learn movement & targeting. My bot does save data between rounds during a battle using static data, but it doesn't currently save it between battles.
I'll prioritise fast learning over saving data to disk for the moment although I could probably save virtual gun hit rates at a later point.
What's the consensus on pre-stored data against various opponents? Is that considered bad form or a reasonable plan?
The consensus on pre-stored data is kind of "allowed, but considered cheesy". We have one such example in the Nanorumble (see LBB), and it's generally considered to be an amusing novelty rather than a serious competitor because it does this.
In the upper ranks of the megabot rumble, it would probably be considered bad form unless it's a one-off experiment of "let's see how well this does", which would be considered interesting instead of bad form.
Just added a very early version of my new robot to the roborumble, just to get a baseline. It's dead simple, though it's a megabot, because it's an extensible and flexible framework to build on.
If I've got anything wrong setting up the bot on the roborumble participants page can someone tell me so I can fix it! Ta!
Getting back into this (again!). I had a few questions on bot size.
I couldn't find the info anywhere through the search, so apologies if this has been answered elsewhere, but what is the bot size?
Obviously the getWidth and getHeight functions return 36, but is this the full width from one edge to the other, or is this from the centre to the edge?
Also, is the hit box of the robot axis-aligned or object-aligned? I.e., as the bot rotates, does the bounding box rotate, meaning the maximum width of the robot is in fact sqrt((36*36)+(36*36)) = ~50.91?
Or ... is the collision detection of bullets etc a simple radius check?
Lastly, some of the examples on pages hardcode the width of the bot to 16 pixels, which is obviously incorrect. See the http://robowiki.net/wiki/Linear_Targeting page: the Exact Non-Iterative Solution section defines the robot width as 16 ... and then proceeds to divide the width by two, which seems doubly incorrect?
The bot is 36x36 and doesn't rotate, always axis-aligned. The 50.91 sounds right (and I remember 25.xx as the max half width). Last point of note is that each tick, bullets advance, then Robocode checks for collisions, then bots move.
Not sure about the 16 - I'd guess some Minibot-ism at play.
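The geometry described above can be sketched like this (class and method names are illustrative): the robot body is a fixed 36x36 square that never rotates, so a point hits it when both offsets from the center are within 18 units, and the corner-to-corner diagonal works out to ~50.91.

```java
// Sketch of the axis-aligned 36x36 hit box, per standard Robocode physics.
public class BotGeometry {
    static final double HALF_WIDTH = 18;

    // No rotation is ever applied: a simple per-axis bounds check suffices.
    static boolean hitsRobot(double bx, double by, double rx, double ry) {
        return Math.abs(bx - rx) <= HALF_WIDTH && Math.abs(by - ry) <= HALF_WIDTH;
    }

    // Corner-to-corner distance: sqrt(36^2 + 36^2) ~= 50.91, so a corner
    // is at most ~25.46 units from the center.
    static double diagonal() {
        return Math.hypot(36, 36);
    }
}
```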
Chase
So I fixed several bugs I found in AgentSmith when I added a load of debug output to my bot ... and I discovered my TCRM score reduced by 2% over 30 seasons. Sigh. Do you prefer bug-free robots or higher scores? :)
We had an extensive discussion about this at one point. In the end, I think most of us felt a bug-free robot was better than a slightly better-scoring buggy robot, since it made it easier to improve its score later, etc.
I think Voidious mentioned he figured out why the bug caused a score improvement and reintegrated it into the robot in a controlled way.
What I do is try to figure out what effect the bug was having, and understand why it caused better results. Then I try to add that effect back in a 'legitimate' way.
An example is when I gained score by removing the variable-bot-width, which accounted for the extra area the bot covers in a wave if it is moving while the wave crosses. I later added in precise-intersection code, which gained me more score, despite (theoretically) doing roughly the same thing as what had cost me score previously. I put it down to my previous method not being accurate enough.
If this was a bug in a GF gun that helped you only against wave-surfers, I'm not particularly surprised. After all, you were using something that they weren't designed to dodge.
No, it was several things, and I'm only working on the random-movement bots at the moment, as wave surfers are a whole different thing.
What I fixed was:
- My waves were 1 tick behind the bullet
- Setting the gun angle was 1 tick behind the angle calculation rather than using the latest calculation
- My automatic weighting was not taking into account the target bot's rotation direction.
Make of that what you will! I'm going to keep the fixes in; as you say, it should make improvements in the future easier. Although I'm currently struggling to find any improvements in the gun, and it's way off the leaders' TCRM scores. :(
I guess I'm too much of a critic and want to be at the top right away. Or at least better than average.
Interesting point about the bot size. I'm using the Math.atan(18/distance) * 2 to get the width of the bot. Hrm!
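As a standalone helper, the formula above looks like this: 18 is the half-width of the 36x36 body, so the full angular width at a given distance is twice atan(18 / distance). This treats the bot as flat-on to the shooter; as discussed above, the true angular coverage varies with orientation and movement, so it is an approximation.

```java
// Approximate angular width of a 36x36 bot as seen from `distance` away.
public class AngularWidth {
    static double botAngularWidth(double distance) {
        return 2 * Math.atan(18.0 / distance);
    }
}
```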
That first one is pretty normal; I do that in my bots because of the physics involved:
- the bullet moves
- collisions are tested
- the bot moves
- we see where everything is.
So our wave should be a tick ahead of the bullet because when the collisions happen that's where the bullet is. You could also do it by having the enemy back one tick.
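A minimal sketch of that bookkeeping (Wave is a hypothetical class, not from any specific bot): the wave radius is advanced to the current tick before testing, so it stays aligned with where the bullet actually is when Robocode runs its collision check.

```java
// Hypothetical wave that tracks a fired bullet's expanding front.
public class Wave {
    final double originX, originY, velocity;
    final long fireTime;

    Wave(double x, double y, double velocity, long fireTime) {
        this.originX = x;
        this.originY = y;
        this.velocity = velocity;
        this.fireTime = fireTime;
    }

    // True once the wave front has reached the enemy position at `time`.
    boolean hasBroken(double ex, double ey, long time) {
        double radius = (time - fireTime) * velocity; // advance first, then test
        return Math.hypot(ex - originX, ey - originY) <= radius;
    }
}
```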
I found there was some gain in calculating precise, simulated GF-1 and GF1 instead of just using asin(8/bVel), so that I never shoot outside of where the enemy could be given the setup of the situation, e.g. walls, heading not perpendicular, etc. Also, to do a better representation of the bot size as the wave passes over, check out Waves/Precise Intersection.
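The quantities referenced above come from standard Robocode physics: bullet speed is 20 - 3 * power, and the classic maximum escape angle is asin(maxBotSpeed / bulletSpeed) with maxBotSpeed = 8. A quick sketch (class name is illustrative):

```java
// Standard Robocode physics: bullet speed and the classic max escape angle.
public class EscapeAngle {
    static double bulletVelocity(double power) {
        return 20 - 3 * power;  // e.g. power 3 -> 11 units/tick
    }

    static double maxEscapeAngle(double bulletPower) {
        // Widest angle a bot moving at top speed 8 can reach relative to
        // the firing position before the bullet arrives.
        return Math.asin(8.0 / bulletVelocity(bulletPower));
    }
}
```

As the reply above notes, this is only an upper bound; walls and non-perpendicular headings shrink the reachable range, which is what the precise GF-1/GF1 simulation accounts for.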
Welcome back dude! :-) You might find User:Voidious/History/Innovations since 2005 an interesting recap of some of the biggest advancements of the last few years. For all the talk of "no breakthroughs since Wave Surfing", bots have continued to get a heck of a lot stronger. DrussGT may excel in crushing the weak, but he's just as scary head to head.
Good luck with AgentSmith!