Leon
Author(s) | Darkcanuck |
Extends | AdvancedRobot |
Targeting | Neural Targeting (PM) |
Movement | Reinforcement Learning |
Released | late 2007 |
Current Rating | 1676 (35th in the Melee Rumble)
Current Version | 1.01 |
Code License | closed |
Download |
Background Information
- What's special about it?
Leon is my first public bot, based on a machine-learning project from a grad course I took in the fall of 2006. Leon uses neural networks for enemy movement prediction, and his movement is based on a reinforcement learning algorithm.
- How competitive is it?
Currently in the top-40 of the Melee Rumble.
Strategy
- How does it move?
Leon picks a movement vector every tick using a reinforcement learning algorithm. I'll probably post more details later, but the algorithm looks at the strength and position of the nearest enemies. Rewards are roughly based on scoring, with positive rewards for dealing damage and surviving, and negative rewards (punishment) for being hit or dying. It's based on continuous-reward SARSA with linear function approximation, if that means anything to you.
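For readers who haven't met it, here is a minimal sketch of SARSA with linear function approximation; the class name, constants, and feature encoding are illustrative assumptions, not Leon's actual code:

```java
// Minimal sketch of SARSA with linear function approximation.
// Q(s,a) is approximated as w . features(s,a); the SARSA update nudges
// the weights along the TD error. All names and constants are assumed.
public class LinearSarsa {
    private final double[] w;          // one weight per feature
    private final double alpha = 0.01; // learning rate (assumed value)
    private final double gamma = 0.95; // discount factor (assumed value)

    public LinearSarsa(int numFeatures) {
        w = new double[numFeatures];
    }

    // Q(s,a) = dot product of the weights and the feature vector for (s,a).
    public double q(double[] features) {
        double sum = 0;
        for (int i = 0; i < w.length; i++) {
            sum += w[i] * features[i];
        }
        return sum;
    }

    // On-policy update: w += alpha * (r + gamma*Q(s',a') - Q(s,a)) * features,
    // since the gradient of a linear approximator is the feature vector itself.
    public void update(double[] features, double reward, double[] nextFeatures) {
        double tdError = reward + gamma * q(nextFeatures) - q(features);
        for (int i = 0; i < w.length; i++) {
            w[i] += alpha * tdError * features[i];
        }
    }
}
```

In Leon's case the feature vector would encode things like the strength and position of the nearest enemies (and, per the melee answer below, the number of remaining opponents), with rewards mirroring the scoring scheme described above.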
There's also a basic dodging algorithm that estimates the location of the nearest incoming bullet (similar to a crude version of wave surfing) and excludes any action that would put Leon in the bullet's path.
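As a rough illustration of that filtering step (the geometry, time horizon, and safety margin below are assumptions, not Leon's numbers):

```java
import java.awt.geom.Point2D;

// Rough illustration of excluding actions that cross an incoming bullet's
// path: project the estimated bullet forward tick by tick and reject any
// candidate destination that comes too close to it.
public class BulletDodgeFilter {
    // Robocode headings: 0 = north, so x uses sin() and y uses cos().
    static boolean wouldBeHit(double destX, double destY,
                              double bulletX, double bulletY,
                              double bulletHeading, double bulletSpeed,
                              int maxTicks) {
        for (int t = 1; t <= maxTicks; t++) {
            double bx = bulletX + Math.sin(bulletHeading) * bulletSpeed * t;
            double by = bulletY + Math.cos(bulletHeading) * bulletSpeed * t;
            if (Point2D.distance(destX, destY, bx, by) < 25) { // ~half bot width + margin
                return true; // this action would put us on the bullet's path
            }
        }
        return false; // safe with respect to this bullet
    }
}
```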
- How does it fire?
Leon learns enemy movement patterns using a neural network for each enemy. An iterative algorithm then predicts future positions for the enemy being targeted and fires where the bullet and victim line up. This is probably similar to ScruchiPu?
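The iterative search might look something like this sketch; predictNext() stands in for the per-enemy neural network and is an assumed interface, not Leon's code:

```java
import java.awt.geom.Point2D;

// Sketch of the iterative intercept search: advance the predicted enemy
// position one tick at a time until a bullet fired now could cover the
// same distance in the same number of ticks.
public class InterceptAim {
    interface Predictor {
        Point2D.Double predictNext(Point2D.Double current); // NN one-step prediction (assumed)
    }

    static double aimHeading(double myX, double myY, double firePower,
                             Point2D.Double enemyPos, Predictor nn) {
        double bulletSpeed = 20 - 3 * firePower; // Robocode's bullet speed rule
        Point2D.Double p = enemyPos;
        for (int tick = 1; tick < 100; tick++) {
            p = nn.predictNext(p);
            if (bulletSpeed * tick >= p.distance(myX, myY)) {
                break; // bullet and victim line up at this tick
            }
        }
        // Absolute bearing (0 = north) from our position to the intercept point.
        return Math.atan2(p.x - myX, p.y - myY);
    }
}
```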
- How does the melee strategy differ from One-on-one strategy?
Leon's movement has been trained specifically for melee. One of the inputs to the learning algorithm is the number of remaining opponents, so 1-on-1 movement will be slightly different from movement on a crowded battlefield. But I don't think his targeting is up to par with a good 1-on-1 bot's.
- How does it select a target to attack/avoid in melee?
Selects the closest target, with some protection against target thrashing.
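One common way to implement that kind of anti-thrashing protection is distance hysteresis; the 0.8 switching threshold below is an assumption:

```java
// Anti-thrashing via hysteresis: only switch targets when a new enemy is
// clearly closer than the current one, so small distance fluctuations
// don't cause rapid retargeting.
public class TargetSelector {
    private String target;
    private double targetDistance = Double.POSITIVE_INFINITY;

    // Call once per scanned enemy each tick.
    public void consider(String name, double distance) {
        if (name.equals(target)) {
            targetDistance = distance;    // refresh the current target's distance
        } else if (distance < 0.8 * targetDistance) {
            target = name;                // switch only on a clear improvement
            targetDistance = distance;
        }
    }

    public String getTarget() {
        return target;
    }
}
```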
- What does it save between rounds and matches?
Between rounds, saves all neural network weights, targeting stats and bullet dodging data.
Between matches, saves only the reinforcement learning parameters. Leon should learn very slowly over time; however, fast learning is done pre-release using a 500-round testbed of melee bots. If you wipe out his battledata.ser and basedata.ser files, he should start re-learning his movement from scratch (with the fast parameters). I need to find a better way to preload the data, though.
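For the curious, here is a generic sketch of how a Robocode bot can persist data like this using the game's file API; only the battledata.ser file name comes from above, and the weight-array payload is illustrative:

```java
import robocode.AdvancedRobot;
import robocode.RobocodeFileOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

// Generic sketch of saving learned parameters to a bot's data directory,
// which survives between matches. RobocodeFileOutputStream is required
// because bots run in a security sandbox.
public class PersistentBot extends AdvancedRobot {
    void saveWeights(double[] weights) {
        try (ObjectOutputStream oos = new ObjectOutputStream(
                new RobocodeFileOutputStream(getDataFile("battledata.ser")))) {
            oos.writeObject(weights);
        } catch (IOException e) {
            out.println("Save failed: " + e); // `out` is the bot's built-in log stream
        }
    }
}
```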
Additional Information
- Where did you get the name?
Blade Runner, of course.
- Can I use your code?
No, sorry. Leon's predecessor was developed for a course that is still offered, and I don't want to give anyone an unfair advantage. But if you want to know more, ask away.
- What's next for your robot?
- Tweak RL movement and NN targeting to improve performance.
- Reduce the number of skipped turns in the first round (due to NN learning); approximately 25% of turns are skipped right now.
- Make the bot much faster, thanks to some tips from Skilgannon.
- Do a complete overhaul, since many of Leon's classes broke with the CPU constant problems in Robocode versions 1.5.0-1.5.3.
- Does it have any White Whales?
SandboxDT always seems to cause trouble in my tests. And Shadow, of course, but isn't that every melee bot's white whale?
- What other robot(s) is it based on?
My own creation. The similarity with ScruchiPu's targeting is accidental. Many tidbits of Robocode wisdom were gleaned from the wiki, of course.
Version History
- 1.0: Initial release
- 1.01: Sped up the NN math by replacing Math.pow() with Math.sqrt() and simple x*x where possible. Rumble ratings fixed with this release.