Thread history

From Talk:Dynamic Clustering

Dynamic Clustering - How many matches do you look for?

From what I understand of dynamic clustering, and the way I am currently looking at implementing mine, you store a history of all stats and the angles at which you would have hit the target. Then, when choosing your targeting angle, you select the top N closest matches to the target's current state and pick the firing angle from those top N. My question is: does anyone have a good ballpark figure for the value of N?

If N is too small you might not have enough data about the target to aim accurately. If N is too large you might end up including too much information, polluting your pool with bad matches.

Or do you not take a fixed N at all, but instead only take matches that satisfy some quality criterion, i.e. only matches within 5% of the target's current state?

Anyone have opinions on this?

P.S. If this is the wrong place to discuss this, tell me and I will move it to the correct place! :)
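For concreteness, here is a minimal sketch of the top-N selection step described above (class and method names are hypothetical; a real gun would use a kd-tree rather than a linear scan):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class KnnSelect {
    // Pick the N stored states closest to the current state, using plain
    // Euclidean distance over the attribute vector. Linear scan for clarity.
    public static double[][] closest(List<double[]> history, double[] current, int n) {
        List<double[]> sorted = new ArrayList<>(history);
        sorted.sort(Comparator.comparingDouble(s -> distance(s, current)));
        return sorted.subList(0, Math.min(n, sorted.size())).toArray(new double[0][]);
    }

    static double distance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }
}
```

Each stored `double[]` would also carry (or index) the hit angle recorded for that state, which the kernel density step then turns into a firing angle.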

Wolfman 20:19, 16 March 2013

It's worth noting that only taking matches within 5% might not produce enough of them, giving the same problem as N being too small. So you could combine the two: select all matches within 5%, and if there aren't enough, top up with the N best of the rest. If you have more than N matches within 5%, take all of them. Thoughts?

Of course then we would need to start discussing both N and match accuracy % values! ;)
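That combined rule could be sketched like this (threshold, names, and data layout are placeholders, not anyone's actual implementation; candidates are assumed pre-sorted by ascending distance):

```java
import java.util.List;

public class HybridSelect {
    // Take every match within the similarity threshold; if that yields fewer
    // than n matches, fall back to the top n overall. Assumes the candidate
    // list and the parallel distances array are sorted by ascending distance.
    public static List<double[]> select(List<double[]> sortedByDistance,
                                        double[] distances, double threshold, int n) {
        int withinThreshold = 0;
        while (withinThreshold < distances.length && distances[withinThreshold] <= threshold) {
            withinThreshold++;
        }
        int take = Math.max(withinThreshold, Math.min(n, sortedByDistance.size()));
        return sortedByDistance.subList(0, take);
    }
}
```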

Wolfman 20:23, 16 March 2013
 

I take the top sqrt(tree.size()) scans, limited at 100. I think it's a pretty good balance between 'generality' and 'specificity'.
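That rule is simple enough to state directly (a one-line sketch of the formula as described, not DrussGT's actual source):

```java
public class KRule {
    // k = min(100, sqrt(number of stored scans)): k grows with the data set
    // early on, then saturates at 100 once the tree is large.
    public static int k(int treeSize) {
        return Math.min(100, (int) Math.sqrt(treeSize));
    }
}
```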

Skilgannon 20:59, 16 March 2013
 

I just take the size divided by some number and limit it to an upper bound. In my new gun, the divisor is 14 with a maximum of 80.

Chase 21:02, 16 March 2013
 

Right now Diamond's main gun uses max(1, min(225, numDataPoints / 9)). So it scales linearly from 1 at start of round 1 to 225 data points at about 2000 ticks (~3rd round). I've many times evolved these settings from scratch with genetic algorithms and gotten max-k values from 150 to 350 and divisor from 9 to 14 without much change in performance.

It's important to note that I (and I think most of us) also weight the data points by distance when doing kernel density to decide on an actual firing angle, which is why the actual choice of k (er, N in your post) doesn't matter so much.
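A sketch of that distance-weighted kernel density step, to show why the exact k matters less than it might seem (the Gaussian kernel, the weighting function, and the bandwidth here are illustrative assumptions, not Diamond's actual values):

```java
public class KernelDensity {
    // Score a candidate firing angle by summing Gaussian kernels centered on
    // each neighbor's recorded angle, weighted down the farther that neighbor
    // was from the current state in attribute space. Marginal neighbors thus
    // contribute little, softening the effect of the choice of k.
    public static double score(double candidate, double[] angles,
                               double[] distances, double bandwidth) {
        double total = 0;
        for (int i = 0; i < angles.length; i++) {
            double u = (candidate - angles[i]) / bandwidth;
            double weight = 1.0 / (1.0 + distances[i]);  // illustrative weighting
            total += weight * Math.exp(-0.5 * u * u);
        }
        return total;
    }
}
```

The gun would evaluate `score` over a set of candidate angles and fire at the highest-scoring one.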

Btw if you're spending a lot of time in the gun lab, you might like WaveSim.

Voidious 21:17, 16 March 2013
 

Combat uses a constant K, and weights data points relative to the farthest of the K closest data points. It's a kind of variable kernel density.

weight = 1 - distance/(max(allDistances)+1)

K is currently set at 19 for the gun (it is this low because it uses only real waves), 17 for wave surfing, and 18 for the flattener.
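The weighting formula above can be written out directly (only the weighting step; the K-nearest-neighbor selection around it is assumed):

```java
public class CombatWeight {
    // weight = 1 - distance / (max(allDistances) + 1)
    // The nearest neighbor gets weight close to 1; the farthest of the K
    // neighbors gets the smallest, but still nonzero, weight.
    public static double[] weights(double[] distances) {
        double max = 0;
        for (double d : distances) max = Math.max(max, d);
        double[] w = new double[distances.length];
        for (int i = 0; i < distances.length; i++) {
            w[i] = 1.0 - distances[i] / (max + 1.0);
        }
        return w;
    }
}
```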

Never done any serious tuning except trying a few adjacent K values in RoboResearch and picking the one with highest APS. Some kind of manual hill climbing.

Using only real waves and not doing any fine tuning is the main reason Combat performs badly in APS league, but does ok in PL league.

MN 00:16, 17 March 2013
 

Thanks for all the replies. I might implement weighting of points based on distance, something I hadn't considered before.

Voidious: I took a look at WaveSim. It looks cool, but perhaps I'm misunderstanding the point: if it is just playing back recorded battles, how can you ever improve your gun against bots that take into account how often / where you hit them and react accordingly?

Wolfman 09:18, 17 March 2013
 

Well... My first answer is you can't, and for tuning against surfers you still need real battles. But most of the rumble is still not very adaptive, so it's good to have a gun that crushes all those bots; that's why I'm just tuning my "Main Gun" with WaveSim.

That said, not all weaknesses that guns prey on are things that even surfers adapt to very well. Surfers are reluctant to get too close, have preferred distances, and other tendencies even if they try to flatten their profiles. Skilgannon has tuned against pre-gathered data for surfers and supposedly had success with it, though I'm not sure there's enough data to say it really worked that way or if just tuning his gun to weird new settings is what helped.

Voidious 09:26, 17 March 2013
 

The huge increase in simulation speed somewhat compensates for not simulating adaptable movements.

And you can try recording a battle, tuning over the static data, then recording another battle under the new parameters. Iterating many times could make the tuning converge to an optimum against adaptable movements, but that is only a theory (I've never tried it).

MN 01:20, 18 March 2013
 

There is also the problem of noise when tuning against real battles.

MN 01:27, 18 March 2013
 

I tried using WaveSim but I'm having some issues. We try to classify tick N, but we have only been fed ticks up to around 50 less than tick N, so I cannot get any state from the classify tick data about the target bot's state at ticks N-1 to N-50. This means I cannot do classification using data like "distance moved last 10 ticks", for instance. Any way to do this?
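One generic way around this inside any classifier that is fed ticks as they arrive is to keep your own short buffer of recent states and derive attributes like "distance moved over the last 10 ticks" from it (a sketch under that assumption, not WaveSim's actual API):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class RecentMovement {
    private final Deque<double[]> recent = new ArrayDeque<>(); // [x, y] per tick
    private final int window;

    public RecentMovement(int window) {
        this.window = window;
    }

    // Call once per tick with the target's observed position.
    public void onTick(double x, double y) {
        recent.addLast(new double[]{x, y});
        if (recent.size() > window) {
            recent.removeFirst();
        }
    }

    // Straight-line distance between the oldest and newest buffered positions.
    public double distanceMoved() {
        if (recent.size() < 2) return 0;
        double[] oldest = recent.peekFirst(), newest = recent.peekLast();
        return Math.hypot(newest[0] - oldest[0], newest[1] - oldest[1]);
    }
}
```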

Wolfman 09:28, 18 March 2013

I had that same problem. The suggestion I got was to modify the robot to record whatever data you want, rerun the battles, and then use that data in your classifier.

But I ended up just using the Tick Classifier (or whatever it's called).

Chase 13:24, 18 March 2013
 

Yeah, WaveSim is really at its best if the data set has all the attributes you use for targeting. Then you can just use the wave classifier and everything's very clean. The TickClassifier gets fed every tick as it happens, so you can use that to supplement the data from the wave classifiers. (Your classifier can implement both.)

It shouldn't be too hard to modify TripHammer to collect different attributes, or to modify your own bot to collect the data. I have a newer/better TripHammer I never got around to releasing, rebased off Diamond after a bit of a refactor/rewrite: [1] ... I'd work off that one if you go that route. voidious/gun/TripHammerGunDataManager has all the data writing stuff pretty cleanly separated out.

Voidious 14:51, 18 March 2013
 

Wouldn't it be good to store the ScannedRobotEvents that TripHammer receives and pipe those into a scanned function in WaveSim, alongside the wave data feeds? After all, that's all the data any robot has to go on, so you would have everything you need, and scan data / waves / classify calls would come in exactly the same order as in TripHammer, allowing any bot to use WaveSim no matter its configuration.

Wolfman 15:04, 18 March 2013
 

Hmm, I think that's a pretty awesome idea in terms of usability, which is probably the main place WaveSim is lacking. I'll definitely look into that if/when I next work on WaveSim.

But another way WaveSim gains speed over real battles is that if you record all the attribute data ahead of time, you don't have to do any complex math (trig, square roots) in your WaveSim test runs to deduce that stuff. So you'd lose that part.

Voidious 15:19, 18 March 2013