So I was testing some other stuff and decided, while I was at it, to bung some Yatagan variants through my benchmarks.
Note: take the figures with a grain of salt, as there can be up to 1% jitter and these are all quite close. Also, my figures often run 0.5 to 1% higher than the rumble. What I got was:
- jk.sheldor.nano.Yatagan 1.1.7 79.6%
- jk.sheldor.nano.Yatagan 1.2.1 78.8%
- nz.jdc.nano.Yatagan117a 1.0.0 79.4%
- nz.jdc.nano.Yatagan117b 1.0.0 79.0%
1.1.7 and 1.2.1 are your released versions; both scores align reasonably well with the rumble and show 1.2.1 down a little.
117a is a copy of 1.1.7 with the movement code change suggested above.
117b is a copy of 117a, but with the addition of setAdjustGunForRobotTurn(true);
Not sure what is causing the (relative) weakness in 1.2.1: it could be the change in energy detection, putting oscillate first in the table, or another table bug. Another possibility is the gun adjust; I get varying results with that and often find it hurts rather than helps the score. The otherwise identical 117b scoring worse than 117a leads me to believe the gun adjust is a factor.
It looks like putting the oscillate first was the issue; 1.2.2 is tied with 1.1.7. I wonder if the setAdjustGunForRobotTurn(true); would be better replaced by setAdjustRadarForGunTurn(true); to cut down on the radar lock issues?
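For reference, a minimal sketch of where those two flags sit in a Robocode AdvancedRobot. This is not Yatagan's actual code (the class name, movement, and lock are placeholders); it just shows the difference: setAdjustRadarForGunTurn decouples the radar from gun turns so aiming doesn't drag the radar off target, while setAdjustGunForRobotTurn decouples the gun from body turns instead.

```java
import robocode.AdvancedRobot;
import robocode.ScannedRobotEvent;
import robocode.util.Utils;

// Hypothetical sketch, not Yatagan: illustrates the two adjust flags
// discussed above and a basic infinite radar lock.
public class AdjustSketch extends AdvancedRobot {
    public void run() {
        // Keep the radar pointed at the target even while the gun turns,
        // which should cut down on radar lock breaks:
        setAdjustRadarForGunTurn(true);
        // The 117b variant instead decoupled the gun from body turns:
        // setAdjustGunForRobotTurn(true);
        while (true) {
            turnRadarRight(360); // spin until something is scanned
        }
    }

    public void onScannedRobot(ScannedRobotEvent e) {
        // Standard lock: turn the radar to the target's absolute bearing.
        setTurnRadarRightRadians(Utils.normalRelativeAngle(
                getHeadingRadians() + e.getBearingRadians()
                        - getRadarHeadingRadians()));
        scan();
    }
}
```

With the radar adjusted for gun turns, the lock in onScannedRobot no longer has to compensate for gun movement between scans, so a pattern matcher sees fewer gaps in its data stream.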
That seems to have made quite a large difference. I suppose I should not be surprised that a pattern matcher doesn't like gaps in its pattern. @#$!&* pattern guns, they are rather strong. I'm not sure even my cheaty dev version will have much margin over 1.2.3. I had better get back to the tuning. Or start praying for inspiration from Saint Michael.