Hi guys!
My name is Bart. I am currently working on my B.Sc. thesis, which is devoted to applying genetic programming to Robocode.
The whole idea behind it is to:
1. Design a few so-called tactics (a complete behaviour, i.e. gun, movement and radar together, that is supposed to fit a particular situation on the battlefield).
2. Find conditions that tell us when using a specific tactic will be more beneficial than the others (e.g. enemy count, the distance to the nearest enemy, and so on).
3. Using genetic programming, evolve a controller that can choose the tactic best fitting the current situation on the battlefield.
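To make step 3 concrete, here is a rough sketch of what I mean by a controller. All of the type names (`Tactic`, `BattleState`, `Controller`) are hypothetical names for illustration only, not Robocode API classes:

```java
import java.util.List;

// Hypothetical sketch: a tactic bundles its behaviour with an evolved
// condition (step 2) saying when it applies.
interface Tactic {
    String name();
    boolean applies(BattleState state); // the evolved condition from step 2
}

// A tiny slice of battlefield information the conditions could look at.
class BattleState {
    final int enemiesAlive;
    final double nearestEnemyDistance;
    BattleState(int enemiesAlive, double nearestEnemyDistance) {
        this.enemiesAlive = enemiesAlive;
        this.nearestEnemyDistance = nearestEnemyDistance;
    }
}

// The controller picks the first tactic whose condition matches,
// falling back to a default tactic otherwise.
class Controller {
    private final List<Tactic> tactics; // ordered by priority
    private final Tactic fallback;
    Controller(List<Tactic> tactics, Tactic fallback) {
        this.tactics = tactics;
        this.fallback = fallback;
    }
    Tactic choose(BattleState state) {
        for (Tactic t : tactics)
            if (t.applies(state)) return t;
        return fallback;
    }
}
```

The evolution then only has to shape the `applies` conditions, not the tactics themselves.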
What I've done so far is to shamelessly take some of the best ideas from RoboWiki and turn them into tactics.
For instance, I've got the tactics listed below:
1. completeSurfer (WaveSurfingRadar, GuessFactorGun, WaveSurfingMovement)
2. meleeCircularGun (LockedGFRadar, CircularTargetingGun, MeleeMovement (minRiskMovement))
3. meleeGFGun (LockedGFRadar, MeleeGFGun, MeleeMovement)
4. ramfire (LockedGFRadar, CircularTargetingGun, RamMovement)
5. randEscapingCircularGun (LockedGFRadar, CircularTargetingGun, RandomizedEscapeMovement)
6. meleeStationaryTgt (LockedGFRadar, StationaryTgtGun, MeleeMovement)
7. ramSilentGun (OldestScannedRadar, SilentGun, RamMovement)
8. ambushEscSilentGunBehavior (OldestScannedRadar, SilentGun, AmbushEscapeMovement)
I've also written a genetic programming algorithm which spawns Robocode environments with the same battle template. It then takes the best individuals from the battles in which my robot had the best overall result and makes them have babies :)
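To give an idea of what I mean by the breeding step, here is a rough sketch. The fixed-length `double[]` genome, the truncation selection, and the crossover/mutation parameters are placeholder choices for illustration, not my actual representation:

```java
import java.util.*;

// Sketch of "take the best individuals and make them have babies":
// keep the top half by battle score, refill by crossover + mutation.
class Breeder {
    private final Random rng = new Random(42);

    // One-point crossover of two equal-length genomes.
    double[] crossover(double[] a, double[] b) {
        int cut = 1 + rng.nextInt(a.length - 1);
        double[] child = new double[a.length];
        System.arraycopy(a, 0, child, 0, cut);
        System.arraycopy(b, cut, child, cut, a.length - cut);
        return child;
    }

    // Gaussian mutation: each gene is perturbed with probability `rate`.
    double[] mutate(double[] g, double rate, double sigma) {
        double[] out = g.clone();
        for (int i = 0; i < out.length; i++)
            if (rng.nextDouble() < rate)
                out[i] += rng.nextGaussian() * sigma;
        return out;
    }

    List<double[]> nextGeneration(List<double[]> pop, double[] scores) {
        // Rank individuals by their battle score, best first.
        Integer[] idx = new Integer[pop.size()];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        Arrays.sort(idx, (x, y) -> Double.compare(scores[y], scores[x]));

        List<double[]> next = new ArrayList<>();
        int elite = pop.size() / 2;
        for (int i = 0; i < elite; i++) next.add(pop.get(idx[i]));
        // Refill the population by breeding random pairs of survivors.
        while (next.size() < pop.size()) {
            double[] p1 = next.get(rng.nextInt(elite));
            double[] p2 = next.get(rng.nextInt(elite));
            next.add(mutate(crossover(p1, p2), 0.1, 0.2));
        }
        return next;
    }
}
```

The fitness evaluation itself happens outside this class, in the Robocode battles.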
The problem I've come across is that after the evolution process my controller eventually learns to use only the single best tactic (which is meleeCircularGun). What I would like to achieve is, firstly, to find distinctive situations on the battlefield in which different tactics have their application; and secondly, to write those tactics and make my controller actually learn when to use them. To give a simple example: I am after a strategy consisting of, say, two tactics, each applied in a different situation, which performs better than a strategy using only one of those tactics.
Could you be so kind as to give me some directions? Which situations on the battlefield may require different tactics? What could those tactics look like? What should I take into account? Maybe you've already discussed a similar matter and I just haven't found it? And finally, is this problem well posed, and in your view is it achievable?
Btw, sorry for my English, I am not a native speaker :) I hope that what I've written is informative enough, though.
Thanks a lot,
Hey, and welcome! Sounds like a cool project. Genetic programming of Robocode bots is certainly achievable, and a few others have explored it. There are several listed at Robocode/Articles, the most recent example being this paper / this video.
There are lots of ways to approach it. I think your current approach will lead to stronger bots because you have very sophisticated parts, but as you've noted, you don't have much variety. As for analyzing situations, I'd take a look at how Robocode guns work - GFTargetingBot is a good example. You can split distance into 4 segments and speed into 4 segments to get 16 different situations, for instance. GFTargetingBot tallies where the enemy moved (relative to him) in each of those situations and fires at the most common destination. Maybe a bot's genes could decide which gun/movement/radar it uses in each of those situations. I'd probably add variants of the parts, too, such as different factors in the minimum risk movement or the attributes used in the gun.
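That segmentation could be sketched like this. The 1200-pixel distance cap and the gene encoding are just placeholder choices (Robocode's max robot speed really is 8):

```java
// Sketch: bucket (distance, speed) into 4 x 4 = 16 situations, and let
// the genome map each situation index to a tactic index.
class SegmentedChooser {
    // Map a value in [0, max] to one of `segments` buckets, clamping at the top.
    static int segment(double value, double max, int segments) {
        int s = (int) (value / max * segments);
        return Math.min(s, segments - 1);
    }

    // 4 distance segments x 4 speed segments = 16 situations.
    static int situation(double distance, double speed) {
        return segment(distance, 1200.0, 4) * 4 + segment(Math.abs(speed), 8.0, 4);
    }

    // genes[situation] holds the index of the gun/movement/radar combo to use.
    static int tacticFor(int[] genes, double distance, double speed) {
        return genes[situation(distance, speed)];
    }
}
```

With a 16-slot genome like that, crossover and mutation stay trivial, and the evolved bot can use a different combo in each situation rather than collapsing to a single tactic.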
Personally, I'd probably start by focusing less on learning bots and more on the evolution aspect with simple parts/variants. Once you have that down and understand Robocode better, evolving bots that learn might come easier to you. Good luck, and let us know how it goes!
--Voidious 15:57, 13 January 2011 (UTC)