Pris/VersionHistory
;0.90 - Faster with fewer bugs?
* RoboRumble ‒ APS: 82.94% (16th), PL: 810-6 (6th), Survival: 91.14%, Glicko2: 2066.6
* First release in over 1.5 years! Matching the performance of 0.88 was a tall order, but exceeding it was even harder. Hopefully this proves to be a worthy release...
* The goal is to improve the PL score; APS may rise or fall.
* Uses Rednaxela's FastTrig (technically the "recreate" version) plus a few other approximation functions gleaned from the web; a lookup-table sketch of the idea follows this list. Thanks to Rednaxela for providing this and to everyone who contributed to it. I've posted my own contributions to the talk page.
* Fixed a number of hard-to-find surfing bugs, at a serious cost to my sanity.
* Features a refactored state management system, which probably introduces newer and harder-to-find bugs.
* Updated the movement simulation code to use Robocode 1.7.3.0 routines.
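Not Pris's actual code, but the general idea behind FastTrig-style approximations is straightforward: precompute sine into a table once and trade a sliver of accuracy for a large speedup in trig-heavy loops. A minimal sketch, where the table size and nearest-entry rounding are assumptions:

<syntaxhighlight lang="java">
public final class FastTrigSketch {
    private static final int SIZE = 8192;                  // assumed table resolution
    private static final double K = SIZE / (2 * Math.PI);  // radians -> table index
    private static final double[] SIN = new double[SIZE];

    static {
        for (int i = 0; i < SIZE; i++) {
            SIN[i] = Math.sin(i / K);   // precompute once at class-load time
        }
    }

    public static double sin(double radians) {
        // Round to the nearest table entry and wrap the index into [0, SIZE)
        int i = (int) (Math.round(radians * K) % SIZE);
        if (i < 0) i += SIZE;
        return SIN[i];
    }

    public static double cos(double radians) {
        return sin(radians + Math.PI / 2);  // cos(x) = sin(x + pi/2)
    }
}
</syntaxhighlight>

The payoff comes in inner loops such as precise movement prediction, where Math.sin/Math.cos calls dominate per-tick CPU time.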
;0.88 - Death to MirrorMicro
* RoboRumble ‒ APS: 82.69% (17th), PL: 810-6 (6th), Survival: 91.31%
* Added a new movement network input based on the Guess Factor currently being visited (i.e. if the enemy fires targeting waves every tick, this is the GF of the currently breaking wave at the moment the enemy fires); see the sketch after this list.
* This should improve performance against MirrorMicro from roughly 45% to about 80%!
* Many thanks to Positive and Skilgannon for pointing this out.
* Incidentally, I implemented this without using waves...
* Added better GF0 and circular-aim avoidance for use before the NN has enough training data for surfing.
* Fixed a few miscellaneous bugs and probably added even more.
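The "currently visited" Guess Factor can be computed straight from geometry recorded when the enemy fired, with no wave list to maintain. A hedged sketch of the standard GF formula; the recorded fields (fire location, absolute bearing at fire time, bullet velocity, lateral direction) are assumptions about what gets stored, not Pris's internals:

<syntaxhighlight lang="java">
import java.awt.geom.Point2D;
import robocode.util.Utils;

public class GuessFactorSketch {
    public static double currentGuessFactor(Point2D.Double fireLocation,
                                            Point2D.Double myLocation,
                                            double absBearingAtFireTime,
                                            double bulletVelocity,
                                            int lateralDirection) {
        // Angle from the firing position to where I am right now
        // (Robocode convention: 0 = north, so atan2(dx, dy))
        double currentAngle = Math.atan2(myLocation.x - fireLocation.x,
                                         myLocation.y - fireLocation.y);
        double offset = Utils.normalRelativeAngle(currentAngle - absBearingAtFireTime);
        // Maximum escape angle for this bullet speed: asin(8 / Vb)
        double maxEscapeAngle = Math.asin(8.0 / bulletVelocity);
        // GF in [-1, 1]; the sign follows my lateral direction at fire time
        return Math.max(-1, Math.min(1, lateralDirection * offset / maxEscapeAngle));
    }
}
</syntaxhighlight>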
;0.86 - 2 Networks
* RoboRumble ‒ APS: 82.24% (18th), PL: 735-6 (5th), Survival: 91.42%
* Added a second network with hidden inputs for improved surfing danger values.
* Movement network training now uses an exponential moving average scheme to combine old data with new (see the sketch after this list).
* Added a "circularity" input to the movement network.
* Tweaked the aiming method for more precision (thanks to rsim's BulletCatcher for making me notice this); still using Gaff's older gun.
* Switched to a better bullet power strategy, rather than the ancient one accidentally included before.
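An exponential moving average blend of old and new training data is essentially a one-liner: each stored target slides toward the fresh observation. A minimal sketch, with the smoothing factor being an assumption:

<syntaxhighlight lang="java">
// Sketch: blend a stored training target toward a new observation.
// Alpha near 1 favours recent data; near 0 favours accumulated history.
public class EmaSketch {
    private static final double ALPHA = 0.25;  // assumed smoothing factor

    public static double update(double oldTarget, double newObservation) {
        return ALPHA * newObservation + (1.0 - ALPHA) * oldTarget;
    }
}
</syntaxhighlight>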
;0.84 - Neural Surfing
* RoboRumble ‒ APS: 80.77% (33rd), PL: 731-11 (7th), Survival: 89.68%
* Another development version; this one was scoring as high as the latest (unreleased) version of Holden, so out into the rumble it goes!
* Uses the latest version of Holden's wave surfing, but with a twist: danger values are generated by a neural network (sketched after this list). Many of the concepts used are similar to Gaff's Targeting, although training was (and remains) a tricky problem to solve.
* Framework updates give this version anti-ramming and lots of bugfixes/refinements since 0.82.
* Still uses the older version of Gaff's gun (i.e. the published one).
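The core of neural surfing is that each candidate movement state is scored by a forward pass instead of a hand-tuned danger formula. A rough sketch of such an evaluation; the topology (one sigmoid hidden layer) and the feature choice are illustrative assumptions, not Pris's actual network:

<syntaxhighlight lang="java">
// Sketch: evaluate a tiny feedforward net to score one candidate state.
// Weights would be trained online from observed enemy hits; the layout
// below is an assumption for illustration only.
public class DangerNetSketch {
    private final double[][] w1;  // input -> hidden weights
    private final double[] w2;    // hidden -> output weights

    public DangerNetSketch(double[][] w1, double[] w2) {
        this.w1 = w1;
        this.w2 = w2;
    }

    public double danger(double[] features) {
        double[] hidden = new double[w1.length];
        for (int h = 0; h < w1.length; h++) {
            double sum = 0;
            for (int i = 0; i < features.length; i++) {
                sum += w1[h][i] * features[i];
            }
            hidden[h] = 1.0 / (1.0 + Math.exp(-sum));  // sigmoid activation
        }
        double out = 0;
        for (int h = 0; h < hidden.length; h++) {
            out += w2[h] * hidden[h];
        }
        return 1.0 / (1.0 + Math.exp(-out));  // danger squashed into (0, 1)
    }
}
</syntaxhighlight>

The surfer would then steer toward the reachable point whose predicted danger is lowest.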
;0.82 - Surfing Learner
* RoboRumble ‒ APS: 72.39% (74th), PL: 647-91 (72nd), Survival: 82.22%
* Introduced a variation on Wave Surfing which used the current state and wave danger as inputs to a Reinforcement Learning algorithm (a generic sketch follows this list).
* Combining WS+RL needed a lot more work and didn't seem to make sense given how straightforward danger evaluation can be. CPU time might be put to better use learning other heuristics (e.g. bullet power selection).
* Retired 27-Aug-2009.
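As a generic illustration of the WS+RL combination described above: a tabular TD(0) learner updates a value estimate for each visited state based on the observed wave outcome. The learning rate, discount, and string-keyed state encoding here are assumptions, not the retired 0.82 implementation:

<syntaxhighlight lang="java">
import java.util.HashMap;
import java.util.Map;

// Sketch: tabular TD(0) value update. The reward would be negative when
// a wave hits us and zero (or positive) when we successfully dodge it.
public class SurfRlSketch {
    private static final double LEARNING_RATE = 0.1;  // assumed
    private static final double DISCOUNT = 0.9;       // assumed
    private final Map<String, Double> values = new HashMap<>();

    public void update(String state, String nextState, double reward) {
        double v = values.getOrDefault(state, 0.0);
        double vNext = values.getOrDefault(nextState, 0.0);
        values.put(state, v + LEARNING_RATE * (reward + DISCOUNT * vNext - v));
    }
}
</syntaxhighlight>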
;0.36c - Retrained & repackaged
* RoboRumble ‒ APS: 69.76% (91st), PL: 653-73 (69th), Survival: 80.58%
* Same as 0.36, but freshly trained and packed with care.
;0.36 - Bullet power experiment + quick fix
* A quick fix for 0.34's problem, using the latest experimental version on hand.
* Ranked 129th in RoboRumble ‒ APS: 63.77, ELO: 1537, Glicko2: 1786, 2119 battles (8-Jun-2009)
* Scored 5-10% lower against the same bots as the benchmark, so this version may have been mis-packaged.
;0.34 - Expanded learning
* Added many new learning inputs, including hit counts and movement history.
* Considers more movement options.
* Dropped exploration/randomness way down (see the sketch after this list).
* Scores 1% higher than Gaff on the benchmark test.
* Ranked 167th in RoboRumble ‒ APS: 62.62, ELO: 1526, Glicko2: 1769, 1237 battles (7-Jun-2009)
* Forgot to comment out code specific to 1.6+ versions of Robocode and received some 0 scores from 1.5.4 clients.
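Dropping exploration "way down" most plausibly means shrinking an epsilon-greedy parameter so the learner almost always exploits its current estimates. A minimal sketch under that assumption (both the mechanism and the epsilon value are guesses, not confirmed details of 0.34):

<syntaxhighlight lang="java">
import java.util.Random;

// Sketch: epsilon-greedy selection over scored movement options.
// A very small epsilon means the learner almost always exploits.
public class EpsilonGreedySketch {
    private static final double EPSILON = 0.02;  // assumed, dialled "way down"
    private static final Random RNG = new Random();

    public static int choose(double[] scores) {
        if (RNG.nextDouble() < EPSILON) {
            return RNG.nextInt(scores.length);   // explore: pick at random
        }
        int best = 0;                            // exploit: highest score wins
        for (int i = 1; i < scores.length; i++) {
            if (scores[i] > scores[best]) best = i;
        }
        return best;
    }
}
</syntaxhighlight>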
;0.20 - Development release
* First version to score in the same neighbourhood as Gaff in my 1v1 test bed.
* Ranked 99th in RoboRumble ‒ APS: 69.02, ELO: 1619, Glicko2: 1857, 2008 battles (5-Jun-2009)