DC and VCS
I feel like "weighting everything equally" is almost an impossible notion. Like say you divide lateral velocity (range 0-8) by 8, and distance by 1000, and now you have them both in the range of 0-1, weighted "equally". But the lateral velocity is effectively weighted higher, since its distribution is spread across much more of the full range than distance (which is rarely near 0 or 1000, and very frequently 400-600).
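To make that concrete, here's a toy sketch (all numbers made up, not anyone's actual code) of why range-normalizing doesn't give equal effective weight, and how dividing by a typical observed spread instead gets closer to what "equal" probably intends:

```java
// Toy illustration: range-normalizing puts both attributes in [0, 1], but the
// effective weight still depends on how spread out each attribute really is.
public class NormalizationSketch {
    public static void main(String[] args) {
        double latVel = 6.0, distance = 470.0;

        // "Equal" weights of 1.0 after dividing by the full range:
        double latVelNorm = latVel / 8.0;      // lateral velocity uses most of 0..1
        double distNorm = distance / 1000.0;   // distance rarely strays far from ~0.5

        // Dividing by a typical spread instead (hypothetical observed spreads):
        double latVelSpread = 2.5, distSpread = 90.0;
        double latVelScaled = latVel / latVelSpread;
        double distScaled = distance / distSpread;

        System.out.println(latVelNorm + " " + distNorm);
        System.out.println(latVelScaled + " " + distScaled);
    }
}
```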
I've had some success with genetic algorithms for tuning attribute weights (with WaveSim), but still not a huge improvement on just hand tuning. The GA stuff is fun to play with though... Got some experiments running right now, in fact. ;)
I'm planning on some GA-esque searches for tuning my weights down the road with deBroglie. How do you set up a nice loooooong set of 35-round Robocode battles for this? RoboResearch?
RoboResearch is certainly the tool of choice for running long sets of tests, but you'd really need your own layer of code on top of it to do any GA stuff. I'd instead just use the Robocode control API (which is pretty easy) for running the battles and collecting the results from your GA code, which is probably easier than manipulating RoboResearch.
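For what it's worth, a minimal sketch of driving battles through the control API might look something like this (the install path and bot names are placeholders, and all the GA bookkeeping is left out):

```java
import java.io.File;
import robocode.BattleResults;
import robocode.control.*;
import robocode.control.events.*;

public class BattleRunner {
    public static void main(String[] args) {
        // Point this at a Robocode install directory (placeholder path).
        RobocodeEngine engine = new RobocodeEngine(new File("/path/to/robocode"));
        engine.setVisible(false);

        // Collect scores when each battle finishes; feed these to your GA.
        engine.addBattleListener(new BattleAdaptor() {
            public void onBattleCompleted(BattleCompletedEvent e) {
                for (BattleResults r : e.getIndexedResults()) {
                    System.out.println(r.getTeamLeaderName() + ": " + r.getScore());
                }
            }
        });

        BattlefieldSpecification field = new BattlefieldSpecification(800, 600);
        // Placeholder bot names -- substitute your candidate and a reference bot.
        RobotSpecification[] bots =
            engine.getLocalRepository("sample.TrackFire, sample.Walls");
        BattleSpecification battle = new BattleSpecification(35, field, bots);

        engine.runBattle(battle, true);  // true = block until the battle is over
        engine.close();
    }
}
```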
The issue I haven't tackled yet is how to actually export each version of the bot to test. You'd have to package it from code with an Ant task or something, or export a .properties file that the bot reads in (probably the route I'll take).
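If you go the .properties route, the exporter side could look roughly like this (path, package name, and property keys are all placeholders); the bot then loads the same file through getDataFile():

```java
import java.io.*;
import java.util.Properties;

// GA-side sketch: write one candidate's weights into the bot's data directory
// before launching the battle.
public class WeightExporter {
    public static void main(String[] args) throws IOException {
        Properties p = new Properties();
        p.setProperty("move.latVelWeight", "4.2");
        p.setProperty("move.distanceWeight", "1.7");
        File f = new File(
            "/path/to/robocode/robots/.data/mybot/MyBot.data/weights.properties");
        f.getParentFile().mkdirs();
        try (OutputStream out = new FileOutputStream(f)) {
            p.store(out, "candidate weights for this battle");
        }
    }
}
// Bot side: inside the robot, open getDataFile("weights.properties") with a
// plain FileInputStream, Properties.load() it, and Double.parseDouble() the values.
```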
All my GA stuff so far has used WaveSim, but I want to use some real battles to rewrite RetroGirl's movement. Keep in mind that GA with real battles will be VERY slow... I've had GA runs that take many days using WaveSim, and WaveSim is orders of magnitude faster than real battles.
I was thinking that a bot could read/write data from its local file directory: read the file as "parents", select parents via a fitness function, apply crossover/mutation, load those values as weights, fight, and write the result at the end of the battle.
Lather, rinse, repeat for thousands of battles. Maybe start culling the least fit members of the population once it gets large enough...
This relies on the bot itself to determine its own score though... iffy.
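A rough sketch of what that per-battle GA step could look like (selection scheme, crossover, and mutation rate are just illustrative choices; reading and writing the population through the bot's data directory is left out):

```java
import java.util.*;

// Sketch: pick two parents from the scored population, cross them over,
// mutate, and use the child as this battle's weights.
public class GaStep {
    static final Random rnd = new Random();

    static double[] nextCandidate(List<double[]> parents, List<Double> scores) {
        double[] a = select(parents, scores);
        double[] b = select(parents, scores);
        double[] child = new double[a.length];
        for (int i = 0; i < child.length; i++) {
            // uniform crossover, then a small Gaussian mutation
            child[i] = (rnd.nextBoolean() ? a[i] : b[i]) + 0.1 * rnd.nextGaussian();
        }
        return child;
    }

    // Simple tournament selection: the better-scoring of two random parents wins.
    static double[] select(List<double[]> parents, List<Double> scores) {
        int i = rnd.nextInt(parents.size());
        int j = rnd.nextInt(parents.size());
        return scores.get(i) >= scores.get(j) ? parents.get(i) : parents.get(j);
    }

    public static void main(String[] args) {
        List<double[]> parents = Arrays.asList(
            new double[] { 4.0, 1.0, 2.0 }, new double[] { 3.0, 2.0, 1.0 });
        List<Double> scores = Arrays.asList(72.0, 65.0);
        System.out.println(Arrays.toString(nextCandidate(parents, scores)));
    }
}
```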
Ah, yeah, that's an interesting idea. In 1v1 you should be able to determine the score. But I know I have a lot of other state associated with the GA stuff itself, so it would be some extra pain to have to import/export that every battle. You'd also miss out on any chance to multi-thread it.
Well, my weighting-them-equally idea was only for the movement; for the gun I definitely don't think that will help. (But I'm not sure how much it will hurt.) For VCS movement, each dimension needs to be in a certain range for the gun to use the data from that point, so I would say each attribute is equally important because the gun needs to match each attribute. (Assuming there is no secondary buffer to fall back on if there are no matches in a given segment.) That being said, because VCS is a binary test while KNN is a search for the closest points, a normal KNN can't be expected to work exactly like a different (but similar) VCS system. Thinking about it more, it seems like you're right that equal weighting is more complicated than just weighting everything 1.0, but I'm not sure how to deal with that. My initial guess would be weight = 1 / probability_randomly_distributed_situation_in_bin, which would mean the weight should be equal to the number of bins used for each attribute.
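As a sanity check on that guess, the arithmetic works out to something like this (bin counts are made-up examples):

```java
// If attributes were uniformly distributed, a random situation lands in a given
// bin with probability 1 / binCount, so weight = 1 / probability is just the
// bin count for that attribute.
public class BinCountWeights {
    public static void main(String[] args) {
        int[] binCounts = { 8, 5, 3 };  // e.g. lateral velocity, distance, accel
        double[] weights = new double[binCounts.length];
        for (int i = 0; i < binCounts.length; i++) {
            weights[i] = 1.0 / (1.0 / binCounts[i]);  // = binCounts[i]
        }
        System.out.println(java.util.Arrays.toString(weights));
    }
}
```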
One problem I have with weighting the distance (or bullet travel time) dimension higher is that, as you said, the distances will mostly be in a range of 400-600. If I were using, say, one point and I had the option of distance = 450 and velocity change = 0, or distance = 499 and velocity change = 1, with the current situation at distance = 460 and velocity change = 1, the best choice is probably the second point, but weighting the distance higher may favor the first.
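Here's that example worked through with made-up weights, just to show the ranking can flip once the distance weight gets large enough:

```java
// Worked version of the example above, with distance normalized by 1000 and
// velocity change left as-is. The weights are made up purely to show the flip.
public class DistanceWeightExample {
    public static void main(String[] args) {
        double[] current = { 460 / 1000.0, 1.0 };  // distance, velocity change
        double[] pointA  = { 450 / 1000.0, 0.0 };
        double[] pointB  = { 499 / 1000.0, 1.0 };

        for (double distWeight : new double[] { 1.0, 50.0 }) {
            double dA = weightedDist(current, pointA, distWeight);
            double dB = weightedDist(current, pointB, distWeight);
            // distWeight = 1:  A ~ 1.01, B ~ 0.04 -> B (the better match) is chosen
            // distWeight = 50: A ~ 1.50, B ~ 1.95 -> A wins despite the mismatch
            System.out.println(distWeight + ": A=" + dA + " B=" + dB);
        }
    }

    static double weightedDist(double[] p, double[] q, double distWeight) {
        return distWeight * Math.abs(p[0] - q[0]) + Math.abs(p[1] - q[1]);
    }
}
```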
I have been wondering for a while how much of Gilgalad's ranking is due to the weighting of KNN and how much is due to a fairly good gun system and wave surfing algorithm, so I am putting out a version with all weights (including gun and flattener weights) set to 1.0. (Actually, I forgot to change the weights on the enemy bullet power estimation to 1.0, but I currently only use that for onWin events, so it shouldn't make a big difference.)
More accurately, for VCS, I would guess attributes don't have a level of "importance" in the algorithm itself (though using some attributes matters more than using others); you just test how much you can segment the data while still having enough of it, and choose arbitrary cutoff points, which is probably why I find KNN more intuitive. But I'm still trying to think of a proof of the optimal similarity to a VCS system with evenly distributed data across all dimensions, a set number of divisions in each dimension, and the divisions evenly spaced but starting at an arbitrary offset (i.e. any number between 0 and 100 could be the start of the first distance bin, each bin covers 100 units, and we pretend the points can't be less than the minimum value).