User talk:Simonton/PFResearch
Latest revision as of 20:02, 20 September 2008

Vibrate Instead of Stop

0080 is a test of something I've been curious about for a long time: how much difference would it make if, instead of stopping, you vibrated slightly, so that half the time you can accelerate in the direction you choose starting at (almost) speed 2 instead of 1? For those who are counting, that puts you (almost) 7 pixels ahead of the competition by the time you both reach full speed. That's not much. In the MC2K7, my scores changed like this (run with 75 seasons):
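The 7-pixel figure is consistent with Robocode's movement rules (1 px/tick acceleration, capped at speed 8): a bot already moving at speed 1 stays exactly one speed step ahead for the 7 ticks before both bots hit full speed. A quick sketch to verify the arithmetic; the class and method names here are illustrative, not from 0080:

```java
// Verify the "7 pixels" claim: in Robocode a bot accelerates 1 px/tick
// up to a maximum speed of 8. Compare cumulative distance travelled when
// starting from a stop (speed 0) versus already moving at speed 1.
public class AccelGap {
    static double distanceUntilFullSpeed(double v, int ticks) {
        double d = 0;
        for (int t = 0; t < ticks; t++) {
            v = Math.min(8, v + 1); // accelerate 1 px/tick, capped at 8
            d += v;                 // distance covered this tick
        }
        return d;
    }

    public static void main(String[] args) {
        int ticks = 8; // ticks for the stopped bot to reach full speed
        double fromStop = distanceUntilFullSpeed(0, ticks);
        double fromOne  = distanceUntilFullSpeed(1, ticks);
        System.out.println(fromOne - fromStop); // prints 7.0
    }
}
```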

  • .18 drop against Splinter & .38 drop against GrubbmGrb. How could doing this hurt my score? Is there so much error in scores at 75 seasons that there can still be a .38 variance? It makes me think I wasn't precise-predicting correctly, but I'm pretty sure I already have enough checks in place to spew out errors when such things go wrong. I'll update here if I find there was a bug.
  • .02 gain against Waylander, .18 gain against GrubbmThree, & .37 gain against Chalk. It must have made no difference against these bots. I could believe the gains against GrubbmThree and Chalk were real, if it weren't for the fact that the same drops were experienced against Splinter and GrubbmGrb.
  • 1.01 gain against RaikoMicro, 2.55 gain against Ascendant & 2.05 gain against CassiusClay. I must believe these are real gains beyond the margin of error. They are major gains. I suspect it messes with their segmentation (time-since-direction-change comes to mind), and that is what explains the gains.

This experiment has certainly been interesting! I'll be running another 75 seasons of one of my bots, to see how much variance there is in the scores. --Simonton 23:06, 17 September 2008 (UTC)

I'm thinking the minimal difference against GrubbmThree is due to the fact that you don't really ever stop. The gains against Raiko, Ascendant and CassiusClay I believe are due to you being able to spread out your location over a wider area, so even random targeting can't hit you as easily. Nice work! --Skilgannon 14:44, 18 September 2008 (UTC)

Really? I find it hard to believe gaining only 7 pixels of escape every once in a while (50% of the time I actually stop at a wave, which is ... I dunno ... 10-20% of waves?) could make more than a slight difference. --192.88.212.34 15:45, 18 September 2008 (UTC)

Hmm, well I suspect that the gain is more due to making some segments less useful (the low-velocity segments, and the time-since perhaps), than increasing the escape area. --Rednaxela 17:45, 18 September 2008 (UTC)
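For reference, the control rule being discussed might look something like the following. This is a minimal hypothetical sketch, not any bot's actual code, and it assumes a setpoint-style velocity interface; Robocode's physics (1 px/tick acceleration, 2 px/tick deceleration) mean the achieved velocity lags these setpoints slightly:

```java
// Sketch: alternate the velocity setpoint between +1 and -1 each tick
// instead of setting it to 0. The bot then never sits in the
// zero-velocity segment, and half the time it is already moving at ~1
// in the direction chosen when a wave arrives. Names are illustrative.
public class VibrateRule {
    // Desired velocity for this tick; wantStop means "would normally halt".
    static double desiredVelocity(boolean wantStop, long tick) {
        if (!wantStop) {
            return 8; // normal movement: head for full speed
        }
        return (tick % 2 == 0) ? 1 : -1; // vibrate instead of stopping
    }
}
```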

The results are in. After three separate 75-season runs of the same bot, there was:

  • 0.02 difference with HawkOnFire
  • 0.93 difference with Splinter
  • 0.19 difference with GrubbmGrb
  • 1.47 difference with Waylander
  • 0.29 difference with GrubbmThree
  • 0.21 difference with RaikoMicro
  • 2.11 difference with Ascendant
  • 0.63 difference with CassiusClay
  • 0.13 difference with Chalk
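For what it's worth, those spreads can be turned into a very rough noise estimate. If each figure is taken as the difference between two independent 75-season runs, the implied per-run standard deviation is on the order of the difference divided by √2 (ignoring the half-normal correction for taking absolute values). This is a ballpark sketch under that assumption, not a proper analysis:

```java
// Rough per-run score noise implied by run-to-run differences,
// treating each d as |run1 - run2| of two independent 75-season runs.
public class ScoreNoise {
    // If d = |X - Y| with X, Y independent and equally noisy,
    // then sigma_per_run is roughly d / sqrt(2).
    static double sigmaPerRun(double d) {
        return d / Math.sqrt(2);
    }

    public static void main(String[] args) {
        double[] diffs = {0.02, 0.93, 0.19, 1.47, 0.29, 0.21, 2.11, 0.63, 0.13};
        for (double d : diffs) {
            System.out.printf("diff %.2f -> per-run sigma ~ %.2f%n", d, sigmaPerRun(d));
        }
    }
}
```

On this reading, even 75 seasons leaves per-pairing noise anywhere from a few hundredths of a point up to roughly 1.5 points.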

*sigh*. I wonder how many good things I have thrown out or bad things I've kept. --Simonton 03:14, 19 September 2008 (UTC)

I have noticed the big Ascendant difference myself - I think it has something to do with Ascendant's permanent antisurfer-gun decision. I'm not sure about the Waylander results; however, I do know that this particular version of Waylander could rebuild data across rounds, complete with across-round delta-heading, so it tends to shoot at GF 0 more often than it should. The latest Waylander doesn't have this weakness, so perhaps it would be a good idea to update that for the MC2K9. Additionally, I think it would be good to have more than one bot in the PM section to even out their contribution to the total score - perhaps a PM that matches and rebuilds data based purely on lateral velocity? There are quite a few nanos that do this. Perhaps WeekendObsession with a limit on how big the string grows? --Skilgannon 19:02, 20 September 2008 (UTC)
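A lateral-velocity pattern matcher with a capped history, along the lines suggested above, could look like this. This is a hypothetical sketch only, not WeekendObsession's actual code (that bot matches on a velocity string, but the class, encoding, and cap here are invented for illustration):

```java
// Sketch of a nano-style symbolic pattern matcher on lateral velocity,
// with a hard cap on history length. All names are illustrative.
public class LateralPM {
    private final StringBuilder history = new StringBuilder();
    private final int maxLength;

    LateralPM(int maxLength) { this.maxLength = maxLength; }

    // Record one tick of lateral velocity (-8..8), encoded as a char.
    void record(double lateralVelocity) {
        history.append((char) ('A' + (int) Math.round(lateralVelocity) + 8));
        if (history.length() > maxLength) {
            history.delete(0, history.length() - maxLength); // enforce cap
        }
    }

    int length() { return history.length(); }

    // Find the most recent earlier occurrence of the last `depth` ticks;
    // returns the index just after that match, or -1 if none.
    int match(int depth) {
        if (history.length() <= depth) return -1;
        String pattern = history.substring(history.length() - depth);
        int i = history.lastIndexOf(pattern, history.length() - depth - 1);
        return (i < 0) ? -1 : i + depth;
    }
}
```

record() would be fed each tick's lateral velocity; on a match, the characters following the returned index replay the enemy's past movement for aiming, and the cap keeps lastIndexOf cheap and the data "rebuildable" across rounds.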