I take it as a given that we all choose the wrong gun pretty frequently. I use precise intersection and normalize hits by distance and MEA. So a hit percentage of 12% from distance 400 is rated the same as 6% from distance 800, for instance. I think this is all pretty optimal, but we still just don't have much data. There's the option of firing a virtual bullet every tick, but this seems inaccurate.
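To make the distance/MEA normalization concrete, here's roughly the idea in a Python sketch (the function names and the 18-pixel bot half-width are my own illustration; a real bot would do this in Java inside the gun):

```python
import math

BOT_HALF_WIDTH = 18.0  # half the width of a Robocode bot in pixels (assumption)

def random_hit_chance(distance, mea):
    """Chance that a uniformly random angle within the MEA hits the bot:
    the bot's angular width divided by the full escape-angle window."""
    bot_angular_width = 2 * math.atan(BOT_HALF_WIDTH / distance)
    return min(1.0, bot_angular_width / (2 * mea))

def normalized_rating(hit_percentage, distance, mea):
    """Scale a raw hit percentage by the random-gun baseline so
    ratings are comparable across distances."""
    return hit_percentage / random_hit_chance(distance, mea)
```

With any fixed MEA, `normalized_rating(0.12, 400, mea)` and `normalized_rating(0.06, 800, mea)` come out nearly equal, which is the "12% at 400 rates the same as 6% at 800" behavior.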
I got the idea to fire a virtual bullet not just for the best angle produced by each gun, but also for the 2nd best, 3rd best, etc., logged into the VG stats with progressively lower weights. I felt this had a lot of promise and spent a few days tweaking various configurations, disappointed to find no improvement in my test bed. But I liked the idea enough that I figured I'd give it one shot in the rumble before moving on. I'm not convinced this isn't random fluctuation or a side-effect of the Anti-Surfer gun changes, but it's a 0.1 APS improvement: . Anyway, just thought I'd share. Maybe I'll revert the VG changes in .14 for a direct comparison.
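A sketch of the weighted multi-bullet scoring described above (Python illustration; the geometric decay and the names are mine, not the actual tested constants):

```python
def score_wave(gun_angles, hit_angle, bot_half_angle, decay=0.5):
    """Credit a gun for one passed wave: its best angle gets full
    weight, the 2nd best gets decay, the 3rd decay^2, and so on.
    gun_angles: candidate firing angles ranked best-first (this
    assumes each gun can rank several angles, not just return one)."""
    score = 0.0
    weight = 1.0
    total = 0.0
    for angle in gun_angles:
        if abs(angle - hit_angle) <= bot_half_angle:
            score += weight  # this virtual bullet would have hit
        total += weight
        weight *= decay
    return score / total  # normalized so a perfect gun scores 1.0
```

A gun whose top pick hits scores much higher than one whose 3rd pick hits, but the lower-ranked angles still feed some signal into the VG stats.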
In my humble opinion the best virtual gun array is no virtual gun array. It adds a great deal of complexity for marginal benefit in most cases. Just try to make the best gun better.
In order to hit both DrussGT and RandomMovementBot12345, you're going to make a trade-off somewhere, whether it's by adding a VG or by sacrificing performance against one to hit the other when tuning a single algorithm. I don't consider it that complex. Improving my Anti-Surfer gun is what got me beating Shadow, which is reason enough for me to have one, and more than a "marginal benefit" in my eyes.
Hmm, while such a trade-off has to exist in some form, a VG array is not the only way to handle minimizing that sacrifice.
The primary trade-off in the "DrussGT and RandomMovementBot12345" case is the question of whether old data should be kept. In something like a NN targeting system, you could adjust that learning parameter by means other than a VG, based on some characteristics of the data. That's just one example, and there are many other ways the sacrifice could be minimized within a single gun. I think that's something that hasn't been explored nearly as much as it could be.
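As a purely hypothetical illustration of that kind of single-gun alternative (not anyone's actual code, and all constants made up): a gun whose learning rate rises when recent prediction error rises, so it relearns quickly after the opponent changes movement without permanently discarding old data:

```python
def adaptive_rate(recent_errors, base=0.1, floor=0.01, gain=5.0):
    """Hypothetical learning-rate schedule for a single gun: scale
    the rate with recent prediction error, so the gun adapts fast
    when the opponent's movement shifts and settles toward a floor
    as the model converges. base/floor/gain are illustrative only."""
    err = sum(recent_errors) / len(recent_errors)
    return max(floor, min(1.0, base * (1.0 + gain * err)))
```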
Anyway, on the topic of "multiple virtual bullets", I have tried something like this before, except I took it to the extreme. What I did in some versions of RougeDC is equivalent to what you suggest except with infinite differently weighted bullets. It integrated the result across the angles.
My experience is that this method adapted much faster than the conventional single-bullet approach. It was very useful against surfers you need to adapt quickly against (e.g. by swapping targeting systems once they've learned one too well). The disadvantage is that it converges to a less accurate result.
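If I understand the "infinite weighted bullets" idea correctly, it amounts to crediting each gun with the mass of its angle distribution that falls inside the window the bot actually occupied when the wave broke. A sketch using a Gaussian kernel density (my choice of kernel and bandwidth, not necessarily what RougeDC used):

```python
import math

def integrated_credit(samples, lo, hi, bandwidth=0.05):
    """Credit for one wave: the fraction of the gun's kernel-density
    estimate (one Gaussian per logged angle sample) lying inside the
    bot's occupied angular window [lo, hi] -- the continuous limit
    of firing infinitely many weighted virtual bullets."""
    root2 = math.sqrt(2.0)
    def mass(s):
        # exact Gaussian mass inside [lo, hi] via the error function
        return 0.5 * (math.erf((hi - s) / (bandwidth * root2))
                      - math.erf((lo - s) / (bandwidth * root2)))
    return sum(mass(s) for s in samples) / len(samples)
```

Every logged angle contributes continuously, so the score moves after each wave, which fits the fast-adapting behavior described above.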
That's interesting to hear. I definitely considered some extreme cases - for instance, for the hit angle, take the normalized score for that angle (e.g., kernel density / max kernel density) as the fraction of a virtual-bullet hit to count it as. Is that something like what you did? Obviously that's biased towards a gun with a pretty flat graph, but even after normalizing I'm not sure if it would favor one of my guns, which have very different looking graphs.
I have a hunch that this type of system should break down (i.e., become so noisy that it's counter-productive) very quickly beyond the first few angles - if it helps at all in the first place. But, not being sure on the theoretical level, I definitely felt on the experimental level that if I couldn't find benefit using only the 2nd/3rd best angles (which also by far carry the most weight), I wouldn't find it by going beyond that.
In any case, I found no improvement in anything I tried, including the variation I'm trying right now that I thought fixed a flaw in 1.6.13's setup. So I'll probably try 1.6.14 with the old VG and see how that goes. I do still think there's potential here.
I've had a problem with the way we normalise our data - I think it should be normalised so that the area underneath the graph is 1, not the maximum value is 1. This would make this technique more useful, as you could simply integrate each gun over the hit area and use that as its hit ratio. I've also had the idea of dividing all my VG hit ratios by the random hit ratio - so that the gun hit-ratios are measured in a useful metric that adapts across bots. But back to designing my genetic gun-weight tuning algorithm!
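A sketch of what area-1 normalization buys you (names and the simple trapezoid integration are illustrative): once a gun's angle profile integrates to 1 like a probability density, integrating it over the bot's angular window directly gives that gun's expected hit probability, so guns with very different-shaped graphs become directly comparable:

```python
def trapezoid(ys, xs):
    """Numerical integral of sampled values ys over points xs."""
    return sum((ys[i] + ys[i + 1]) * (xs[i + 1] - xs[i]) / 2
               for i in range(len(xs) - 1))

def gun_hit_ratio(density, angles, lo, hi):
    """Rescale a sampled angle profile so its area is 1 (instead of
    peaking at 1), then integrate it over the bot's window [lo, hi].
    Note: sample points straddling the window edges are dropped, so
    the window should be resolved by the angle grid."""
    area = trapezoid(density, angles)
    window = [(x, y / area) for x, y in zip(angles, density)
              if lo <= x <= hi]
    xs = [x for x, _ in window]
    ys = [y for _, y in window]
    return trapezoid(ys, xs)
```

Dividing that number by a random gun's hit ratio over the same window would then give the bot-independent metric described above.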
Yep, I do normalize against the random hit ratio, in effect, by scaling with distance and precise MEA whenever I log a hit. Regardless, 1.6.14 looks well within the margin of error of 1.6.13, which is in line with all my tests of the multi-bullet VG, so I'm happy to revert a bunch of worthless code. =)