Talk:Dynamic Clustering

Contents

  • Using previous GFs as dimensions (24 replies, last modified 16:27, 12 February 2023)
  • Dynamic Reweighting (5 replies, last modified 10:00, 23 December 2013)
  • Density Calculations (8 replies, last modified 18:54, 16 December 2013)
  • Storing Data (18 replies, last modified 02:05, 19 March 2013)
  • Dynamic Clustering - How many matches do you look for? (14 replies, last modified 15:19, 18 March 2013)

Using previous GFs as dimensions

I tried making a gun which uses kNN classification, in which one dimension is shots fired (for data decay) and the others are the last 5 GFs the opponent went to on firing waves. It's very simple right now and I haven't tried much, but it does reasonably well (as in it doesn't get crushed, and does better than random targeting) vs. both surfers and simple movers. (Haven't tried it against random movement yet.) It seems similar to pattern matching on the opponent's GFs. Any ideas?

Straw (talk)03:01, 20 December 2013

Can you show the RoboWiki the gun?

Tmservo (talk)03:33, 20 December 2013
 

Interesting. I think you should definitely test against RM, because surfers generally don't like anything with bullet waves, and simple movers are pretty easy to hit. Also, you might want to start with the basics (distance, velocity, relative heading, acceleration...) before you get into data decay and experimental dimensions. Good luck.

Sheldor (talk)04:42, 20 December 2013

Don't worry! I am actively optimizing my more "normal" gun using standard predictors. What bots would you recommend as good test bots with strong RM? Tmservo, it's just a 6-dimensional kNN: 5 dimensions are the last GFs normalized to [0, 1], and the last one is sqrt(shots) * 0.5. I haven't added any kernel density stuff to this tree; I plan to try that.

Straw (talk)06:35, 20 December 2013
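
For concreteness, a minimal sketch of the attribute vector described above (lastGFs and shotsFired are hypothetical names; the exact rescaling is assumed from the post):

    // Hypothetical sketch of the 6-dimensional attribute vector described
    // above: lastGFs[0..4] hold the enemy's last five visited GFs in [-1, 1].
    static double[] attributes(double[] lastGFs, long shotsFired) {
        double[] features = new double[6];
        for (int i = 0; i < 5; i++) {
            features[i] = (lastGFs[i] + 1) / 2;    // rescale GF to [0, 1]
        }
        features[5] = Math.sqrt(shotsFired) * 0.5; // data-decay dimension
        return features;
    }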

Just DrussGT, Diamond, GilGalad, Tomcat, XanderCat, WaveSerpent, Nene, GresSuffurd.

Tmservo (talk)13:21, 20 December 2013

Those are not bots with Random Movement as Straw asked for.

Rednaxela (talk)14:45, 20 December 2013
 

I'd suggest you use the reference bots from the Targeting Challenge RM, because they give you a good variety of random movements, and have been thoroughly tested against before, so the 'expected' result is fairly accurately known. You may wish to use RoboRunner to automate testing against that set as well.

Rednaxela (talk)14:47, 20 December 2013
 

Yeah, like Rednaxela said, the reference bots would work well. I'm curious, are you planning to have these two guns in a VG array, or are you just experimenting in parallel?

Sheldor (talk)15:30, 20 December 2013

Not sure yet, could use a VG array, or might get results from two trees and do density calculations on the results from both.

Straw (talk)03:11, 21 December 2013
 
 
 

Seems like a neat idea. I'm pretty sure I recall prior GFs being used in kNN targeting in the past (I think I may have tried that a little), but never (to my knowledge) going to the extreme of 5 prior GFs.

Rednaxela (talk)15:10, 20 December 2013
 

Works well against active flatteners. Bad against the rest. Anti-flattener classifier.

MN (talk)16:38, 20 December 2013

Well, I'd imagine it might work okay against oscillator, stop-and-go, and CircleBot movements... but out of strong movements, yeah, I'd mostly expect flatteners indeed.

Rednaxela (talk)16:52, 20 December 2013
 

If this does well vs. flatteners, it would fit my strategy perfectly, as I am trying to create anti-surfer and anti-flattener guns independently first. Another idea closer to pattern matching would be using the past 10 GFs of all waves, including non-firing waves every tick. I am interested in the idea of looking at changes in GF between waves instead of absolute GF, as Rednaxela is suggesting.

Straw (talk)19:59, 20 December 2013
 

I'm curious how you're structuring this: Do you condense these 5 GFs into one attribute, or are they 5 independent attributes?

The most practical example this reminds me of is that I believe Skilgannon uses his own last GF as an input to his gun to help crush mirror movement. I would guess that using the enemy's previous movements behaves similarly to attributes like "distance moved in the last X ticks" or "time since velocity change".

My only other comment is that 5 seems like a lot. At an interval of 14 ticks, that's 70 ticks into the past, and more if you count the time it took to reach that first GF. My "distance last X" experiments have never gone past ~40 ticks, IIRC, and even that was not a strong signal.

Voidious (talk)18:07, 20 December 2013

Straw already said that there are six dimensions in his(?) gun, one for decay, and one for each of the enemy's last five visited GFs.

Sheldor (talk)18:25, 20 December 2013


If one wanted the ones you mention to be treated as similar, the way to do it would be to make the dimensions either:

GF[i], (GF[i] - GF[i-1]), (GF[i-1] - GF[i-2]), (GF[i-2] - GF[i-3]), (GF[i-3] - GF[i-4])

or perhaps:

GF[i], (GF[i] - GF[i-1]), (GF[i] - GF[i-2]), (GF[i] - GF[i-3]), (GF[i] - GF[i-4])

(where index 'i' is the most recent wave, 'i-1' is the second most recent wave, etc)

Of course, whether that would give better results than keeping them all independent I have no idea... and which of those ways of making it relative would be better, I'm not sure about either.

Rednaxela (talk)18:49, 20 December 2013
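
In code, those two encodings might look like this (a sketch assuming gf[0] is the most recent wave's visited GF, gf[1] the one before, and so on):

    // Two candidate relative encodings of the GF history, per the formulas above.
    static double[] chained(double[] gf) {   // consecutive deltas
        return new double[] { gf[0], gf[0] - gf[1], gf[1] - gf[2],
                              gf[2] - gf[3], gf[3] - gf[4] };
    }
    static double[] anchored(double[] gf) {  // deltas from the most recent GF
        return new double[] { gf[0], gf[0] - gf[1], gf[0] - gf[2],
                              gf[0] - gf[3], gf[0] - gf[4] };
    }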
 

I appreciate the careful "(?)" after "his". I am male, though.

Straw (talk)20:04, 20 December 2013
 

Doesn't DrussGT use the opponent's last GF in his targeting system, not his own? (In addition, a dimension called "my expected mirror rotation at bullet hit time".) It looks so far into the past because in both surfers and flatteners (especially surfers), recent past movements are a good indicator of present movements. Fast-decaying "normal" anti-surfer guns do a similar thing in a different way.

Straw (talk)20:02, 20 December 2013
 

Yes, I do something similar in DrussGT, in two places.

1. I use the enemy's current GF as a gun attribute to help with oscillators

2. I use my predicted GF one BFT in the future as an input to help against mirror bots (and it helps a little against most other bots as well)


I'm also curious about the last 5 GFs - are they sorted and fed to the gun in ascending order, or are they used in the order they were collected? I feel that some distance/clustering method other than KNN might work best in this situation.

Skilgannon (talk)20:11, 20 December 2013

I agree that something other than kNN might be better, but it's what I have set up right now. No sorting happens; the gun just has a dimension for the last GF, a dimension for the GF before that, etc. I found increased performance by dropping data decay, going to 10 past GFs, and using descending weights on older GFs. Data decay shouldn't be necessary with this setup: if the opponent changes their movement patterns, they will change their GF sequences.

Straw (talk)03:18, 21 December 2013
 

Exciting anti-flattener results! The gun gets pretty much the same hit rate vs. DrussGT (which can't shoot) whether DrussGT's flattener is allowed to turn on or forced off. Around 10.6% weighted, from DrussGT's console. For comparison, Diamond's gun gets around 12.6%. The interesting thing is that the hit rate didn't go down at all with the flattener.

Straw (talk)03:42, 21 December 2013

When I increase K in my GFs-as-dimensions gun, performance improves. Interesting.

Straw (talk)22:03, 22 December 2013

By exactly how much per K?

Tmservo (talk)23:00, 22 December 2013
 
 
 
 
 

Dynamic Reweighting

Having optimized weights appears to be very important, especially the weight on old vs. new data. However, these optimal weights may not be the same against all opposing bots, particularly in the case of hitting adaptive vs. non-adaptive movement. If your kD-tree provides dynamic weighting functionality, you could theoretically have good weightings vs. opponents after a few rounds. (See my post in "Hard-coded segmentation".) The difficulty is thus finding the optimal weights for hitting a given opponent in a short learning period.

One approach would be to change your weights after a missed wave, in a manner similar to training neural networks. When a wave hits and your bullet misses, do kNN searches on non-firing ticks (during which you have extra calculation time) to see whether the weight on a particular predictor would have caused a better prediction (closer to the correct GF, or perhaps DV) if it were higher or lower. Then look at the results of the past several calculations for trends, to avoid training on noise. (This is somewhat similar to what Gaff's gun does: it stores results from the past 5 missed waves and trains the neural network with all of them on a miss, if I remember correctly.)

This seems like it would tend to converge on better weights, which might help fight opponents with adaptive movement. It would also partially alleviate the curse of dimensionality in kNN classification by lowering the weights of unimportant predictors (against a particular opponent) to close to zero, essentially removing them. Does anyone else have ideas for optimizing weights during combat?

Straw (talk)07:29, 13 December 2013
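
A rough sketch of that weight-nudging idea, under heavy assumptions: a naive 1-NN predictor stands in for the real kNN search, all names are hypothetical, and a real gun would accumulate trends over several waves before stepping.

    // After a missed wave, nudge each dimension weight toward whichever
    // perturbation (up or down) would have predicted a GF closer to the
    // GF the enemy actually visited.
    static void adjustWeights(double[] weights, double[][] data, double[] visitedGFs,
                              double[] situation, double actualGF) {
        double eps = 0.1, step = 0.02;
        for (int d = 0; d < weights.length; d++) {
            double saved = weights[d];
            weights[d] = saved * (1 + eps);
            double errUp = Math.abs(predictGF(data, visitedGFs, situation, weights) - actualGF);
            weights[d] = saved * (1 - eps);
            double errDown = Math.abs(predictGF(data, visitedGFs, situation, weights) - actualGF);
            // step toward the better direction; unhelpful dimensions drift toward 0
            weights[d] = Math.max(0, saved + (errUp < errDown ? step : -step));
        }
    }

    // Naive stand-in for the kNN search: weighted 1-nearest-neighbour.
    static double predictGF(double[][] data, double[] visitedGFs, double[] q, double[] w) {
        int best = 0;
        double bestDist = Double.POSITIVE_INFINITY;
        for (int i = 0; i < data.length; i++) {
            double s = 0;
            for (int j = 0; j < q.length; j++) {
                double diff = (data[i][j] - q[j]) * w[j];
                s += diff * diff;
            }
            if (s < bestDist) { bestDist = s; best = i; }
        }
        return visitedGFs[best];
    }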

In a gun, the easiest way would be having many sets of weights and putting them all in a virtual gun array. But if all the sets perform similarly to each other, the virtual gun will not be able to pick the optimal one due to noise.

And flatteners actively try to screw up most statistics, including virtual gun scores.

MN (talk)15:37, 13 December 2013

For tuning weights in a gun, I'd strongly recommend testing things offline with WaveSim or similar. It's a couple orders of magnitude faster to test against pre-gathered data.

Also, I've done quite a bit of tuning of gun weights this way. The gun weights in Diamond were evolved through a genetic algorithm with WaveSim. But honestly it barely outperforms the hand-tuned weights I had before that. I'm fairly skeptical that it's worth ever straying from "generally optimal" in order to eventually get "bot-specific optimal". That is, I think a gun with generally optimal weights tuned over 10,000 matches will beat bot-specific optimal tuned over <= 35 rounds.

Worth noting that all the tuning I've done was specifically on attribute weights though, not decay rate or other factors. My main gun doesn't decay, just my Anti-Surfer gun. (And you may not want to tune an AS gun against pre-gathered data, though Skilgannon has.)

Voidious (talk)17:32, 13 December 2013

Based on information from you and others who say that weights on predictors other than data decay have very little effect on performance, the method I described might only be useful for adjusting your data decay in an anti-surfer gun. However, it might also allow you to add more specialized predictors which start at weights of zero and are only used if found to be relevant. For example, what if you found that looking at some statistic of the past 5 GFs a bot goes to helps against bots using flatteners, but not at all against anything else? Adding this to a statically weighted gun would probably decrease performance against everyone but opponents using flatteners. I might use WaveSim when I'm working on a gun against non-adaptive movement.

Straw (talk)17:43, 13 December 2013
 
 

I notice that Wintermute prints out "move weights" to the console, and that they change. Is it changing kNN weights?

Straw (talk)02:24, 23 December 2013

IIRC, there are several different kNN schemes, and it changes the weight at which each of their results is valued. So: one for simple guns, one for intermediate, one for more complex guns, and a flattener scheme. Then these are evaluated and chosen depending on how well their contributions predicted the enemy gun.

Skilgannon (talk)10:00, 23 December 2013
 
 

Density Calculations

I have been working on a pure anti-surfer gun (not for use against flatteners!) and I am having some trouble implementing firing angle density calculations. I've mostly been testing vs. DrussGT with its flattener and shooting disabled. Right now I just fire at the nearest neighbor to the current situation, because every time I try to look at more neighbors and use some highest-density calculation, it significantly reduces my hit rate. I've tried doing simple things like counting points near each point, and some more advanced stuff with Gaussian rolloff both for distance (in kNN) and for how near the point is on the GF scale [-1, 1]. Help?

Straw (talk)05:05, 15 December 2013
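
For reference, a minimal kernel-density sketch along the lines described above (not Straw's actual code; the bandwidth and the 1/(1+distance) situation weighting are assumptions):

    // Score candidate GFs in [-1, 1] by Gaussian kernel density over the
    // k nearest neighbours' visited GFs; knnDists are the distances from
    // the kNN search. Fire at the highest-scoring GF.
    static double bestGF(double[] neighborGFs, double[] knnDists, double bandwidth) {
        double bestGf = 0, bestScore = Double.NEGATIVE_INFINITY;
        for (double gf = -1.0; gf <= 1.0; gf += 0.01) {
            double score = 0;
            for (int i = 0; i < neighborGFs.length; i++) {
                double u = (gf - neighborGFs[i]) / bandwidth;
                double situationWeight = 1.0 / (1.0 + knnDists[i]); // closer situations count more
                score += situationWeight * Math.exp(-0.5 * u * u);
            }
            if (score > bestScore) { bestScore = score; bestGf = gf; }
        }
        return bestGf;
    }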

Surfers are designed to screw up highest density calculation on purpose.

You are better off trying to shoot everywhere except where they were in the past and were hit by your bullets (the highest peak). Peaks which formed only after your last bullet hit still work, though, and the easiest way to do this is using data decay. Using fewer neighbors has the same effect to a lesser extent.

MN (talk)15:28, 15 December 2013

Indeed, I use a pretty low K in my anti-surfer gun. Higher than 1 though.

Also, make sure you're running enough battles to get an accurate measurement. It might be more battles than you think. More than a few bad Robocode conclusions have come from not gathering enough data.

Voidious (talk)17:40, 15 December 2013

How low is your K? I've only run a few battles, but when my hit rate goes under 8% vs. a bot which can't shoot and can't flatten, and normally it's above 12%, it makes me think there is something wrong. I'll run a few more to make sure.

Straw (talk)20:33, 15 December 2013

My Anti-Surfer gun uses 4 data sets. They are k=3 of the last {125 | 400 | 1500 | 4000} data points. So the least-decaying one has ~5 rounds of data, and many of the selected data points will overlap.

8% vs. 12% raw hit rate over a few battles could still be just fluctuation. If it's a normalized hit rate, that might be significant... I generally run a few hundred to a few thousand battles to test a change, for what it's worth.

Voidious (talk)20:53, 15 December 2013
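
The structure Voidious describes might be sketched like this (names and the list-based kNN are assumptions; the real thing presumably uses kD-trees):

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    // Assumed shape of the scheme above: four views over one chronological
    // wave log, each limited to the most recent N points and searched with
    // k=3, so recent data gets picked repeatedly (an implicit decay).
    class DecayWindows {
        static final int[] WINDOWS = {125, 400, 1500, 4000};

        static List<double[]> neighbors(List<double[]> log, double[] query) {
            List<double[]> picked = new ArrayList<>();
            for (int w : WINDOWS) {
                List<double[]> recent = log.subList(Math.max(0, log.size() - w), log.size());
                picked.addAll(kNearest(recent, query, 3));
            }
            return picked; // overlapping picks over-weight newer points
        }

        static List<double[]> kNearest(List<double[]> pts, double[] q, int k) {
            List<double[]> copy = new ArrayList<>(pts); // naive stand-in for a kD-tree
            copy.sort(Comparator.comparingDouble((double[] p) -> dist(p, q)));
            return copy.subList(0, Math.min(k, copy.size()));
        }

        static double dist(double[] a, double[] b) {
            double s = 0;
            for (int i = 0; i < b.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
            return s;
        }
    }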

It's the normalized hit rate calculated by DrussGT. I just did 15 battles; it seems unlikely to be due to random fluctuation. Is your anti-surfer gun a composite of multiple other guns? How does it work?

Straw (talk)21:10, 15 December 2013
 
 
 
 
 

Storing Data

Do you store all your data between matches for each robot you encounter? Or do you start with a fresh tree each time? Store a subset of the data that will give you a good start? I assume storing the full data set would be too large to fit in the bot's data directory? But it would be good to store some of your tree so that you don't have to re-learn every time. I couldn't see details on how many rounds in a row the RoboRumble ran for each bot match. Is it enough that you have time to populate the tree with useful data?

Wolfman12:42, 18 March 2013
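
For anyone who does want to try saving, a minimal sketch using Robocode's data directory API (the file name and stats object are hypothetical; the per-bot data quota is small, so you would save a condensed subset rather than a whole tree):

    import java.io.*;
    import robocode.AdvancedRobot;
    import robocode.RobocodeFileOutputStream;

    // Hypothetical example of persisting (a subset of) learned data
    // between battles via the bot's data directory.
    public class SavingBot extends AdvancedRobot {
        void saveStats(Serializable stats, String enemyName) {
            try (ObjectOutputStream oos = new ObjectOutputStream(
                    new RobocodeFileOutputStream(getDataFile(enemyName + ".dat")))) {
                oos.writeObject(stats);
            } catch (IOException e) {
                e.printStackTrace();
            }
        }

        Object loadStats(String enemyName) {
            File f = getDataFile(enemyName + ".dat");
            if (!f.exists() || f.length() == 0) return null;
            try (ObjectInputStream ois = new ObjectInputStream(new FileInputStream(f))) {
                return ois.readObject();
            } catch (IOException | ClassNotFoundException e) {
                return null; // fresh start on any problem
            }
        }
    }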

I don't save data between battles. It's probably worth a few RR points, but it's so inconsistent based on how your battles get distributed that I just avoid the headache.

Voidious13:39, 18 March 2013
 

Does Robocode security disable network access? I had an idea of storing data on a cloud server and then loading that data when a robot starts, meaning your robot could store data from all RoboRumble matches and share it between distributed runs! ;) But I'm guessing that's not possible?

It's just interesting because my gun definitely gets better the larger the kD-tree becomes. It's worth a good few percent between 100 rounds and 500 rounds of data, for instance. I guess I'll have to tune it for 35 rounds :(

Wolfman13:43, 18 March 2013
 

Yep, network access is disabled. Sharing saved data across clients has always seemed like a potentially cool RoboRumble feature, but personally I'm more in the "get rid of all saved data" camp. :-) It's helpful to log exceptions for debugging, though.

Voidious13:46, 18 March 2013
 

Ok fair enough. So what is the number of rounds that the rumble runs each match? Is it 35 like the latest targeting challenges?

Wolfman13:51, 18 March 2013
 

Yep, 35.

Voidious13:56, 18 March 2013

The roborumble (800x600) and meleerumble (1000x1000) are 35 rounds, the teamrumble (1200x1200) is 10 rounds, and the twinduelrumble (800x800) is 75 rounds (with all forms of data-saving between matches banned).

Sheldor14:55, 18 March 2013
 

And not trying to discourage you from saving data - if it's interesting to you and there's points to be had, go for it!

What I like about not having it is that it makes for a more clearly defined problem, and you don't run the risk of real improvement getting hidden by fluctuations in the performance of your data saving.

Voidious14:39, 18 March 2013
 

I guess what is interesting to me is the identification of patterns in large data sets that saving would give you. Whereas not saving data makes the problem either "how do I gather data fast enough to be useful?" or "how do I perform well with small data sets?". For instance, I would assume that wave surfers would perform better after more rounds.

Do you pre-populate any of your data at the start of a match even if you don't save data about specific robots?

Wolfman15:21, 18 March 2013
 

I have two things that are a form of pre-populated data:

  • In surfing, I surf as if there's one hit at GF=0 until I have any data.
  • In the gun, I have this silly RetroGirl/Gun that I use for the early ticks of round 1 before my real gun has data. (The real gun quickly outperforms it with even a tiny amount of data.)
Voidious15:28, 18 March 2013
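
The first item might be sketched like this (representation assumed, not Diamond's actual code):

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch of seeding surf stats: until the enemy actually
    // hits us, the danger profile is a single synthetic hit at GF = 0
    // (head-on), which is where most simple guns aim.
    class SurfSeed {
        final List<Double> enemyHitGFs = new ArrayList<>();

        List<Double> dangersToSurf() {
            if (enemyHitGFs.isEmpty()) {
                return List.of(0.0); // pre-populated "one hit at GF=0"
            }
            return enemyHitGFs;
        }
    }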
 

Yup, I think I may have to investigate switching guns depending on the amount of data in my data set, to improve performance in early rounds (or in round 1). At the moment I am only using a DC gun.

Wolfman17:01, 18 March 2013

What other kind of gun would you be using?

Sheldor18:01, 18 March 2013
 

I don't know, but I have a hunch that a pre-loaded, simply segmented GF gun may be more accurate for the first few shots compared to an almost-empty DC gun *shrug*. My gun at the moment has a noticeable increase in accuracy between 35 rounds and 100. :/

Are there any tricks to make a DC gun better in the first few rounds that I am missing?

Wolfman19:14, 18 March 2013
 

Pre-loading of any sort I'll give you, but I consider fast learning one of the strengths of DC over VCS. As you get more data, it automatically tightens its bounds and uses more relevant data. To achieve the same with VCS you need to layer buffers of different complexities or dynamically segment.

Circular targeting or RetroGirl/Gun certainly outperform a DC gun with no data, and I am crazy enough to have such a gun just for that purpose... But from what I recall, it's like 2-5 data points (like 50 ticks into round 1) where my DC gun starts outperforming them.

Voidious19:22, 18 March 2013
 

Wolfman,
I would say, don't worry about the first round so much. I don't know much about megas, but I don't think that missing two shots due to a lack of data would do any noticeable damage to your score.

Sheldor21:51, 18 March 2013

Probably true that it's something to save for later, but there's good reason to put extra emphasis on those early ticks/rounds. There are a lot of mid-range bots that have a chance at taking a round from (e.g.) Diamond early on, even if Diamond will reliably crush them every round after round 5. That 1 round is big in percent score. Every shot counts! :-)

Voidious22:07, 18 March 2013
 

Between versions 1.7.2 and 1.7.4, Diamond gained .15% APS, 4.2 Glicko, 1 PL, and .06% Survival.

Sheldor22:32, 18 March 2013
 

<Squinting Fry> Can't tell if agreeing or disagreeing with me... :-)

Totally agree that improving the accuracy of the first few shots is only a tiny (but measurable) improvement to Diamond's already state-of-the-art gun. That it's measurable only lends credence to the notion that Wolfman is right to put some emphasis on performance in the first few rounds (not necessarily ticks) in his still-early-in-development gun.

Voidious22:51, 18 March 2013
 

You added a very powerful pre-loaded gun to an extremely powerful learning gun, and you got an improvement of .15% APS. That's significant when you're grappling for the throne, but I don't really think that it's worth Wolfman's time.

If I were you, Wolfman, I would focus on making a gun that works extremely well in the last 34 rounds, before worrying about squeezing every last point out of the first.

Sheldor00:01, 19 March 2013
 
 
 

Dynamic Clustering - How many matches do you look for?

From what I understand of dynamic clustering, and the way I am currently looking at implementing mine, you store a history of all stats and the angles at which you would have hit the target. Then, when choosing your targeting angle, you select the top N closest matches to the target's current state, and select the firing angle from those top N. My question is: does anyone have a good ballpark figure for the value of N?

If N is too small, you might not have enough data about the target to get accuracy. If N is too large, you might end up including too much information, polluting your pool with bad matches.

Or, do you not take N all the time, but instead only take matches which satisfy some criterion on how good the match is, i.e. only matches which are within 5% of the target's current state?

Anyone have opinions on this?

P.S. If this is the wrong place to discuss, tell me and I will move it to the correct place! :)

Wolfman20:19, 16 March 2013
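
A generic skeleton of the loop described above (all names assumed; real guns replace the simple averaging at the end with kernel density, as discussed in the replies below):

    import java.util.List;

    // Generic DC targeting skeleton: find the N logged situations nearest
    // to the current one and derive a firing angle from their hit angles.
    // 'situations' and 'hitAngles' are parallel lists.
    static double chooseAngle(List<double[]> situations, List<Double> hitAngles,
                              double[] current, int n) {
        Integer[] idx = new Integer[situations.size()];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        java.util.Arrays.sort(idx, java.util.Comparator.comparingDouble(
                (Integer i) -> dist(situations.get(i), current)));
        int use = Math.min(n, idx.length);
        double sum = 0; // simplest possible selection: mean matched hit angle
        for (int i = 0; i < use; i++) sum += hitAngles.get(idx[i]);
        return sum / Math.max(1, use);
    }

    static double dist(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
        return s;
    }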

It's worth noting that only taking the matches within 5% might not produce enough matches, and will have the same problem as N being too small. So you could combine them: select matches within 5%; if there are not enough, select the top N best of the rest. If you have more than N matches within 5%, then take all of those. Thoughts?

Of course then we would need to start discussing both N and match accuracy % values! ;)

Wolfman20:23, 16 March 2013
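
That hybrid rule is easy to state in code (a sketch; distances are assumed pre-sorted ascending, with the threshold on the same scale):

    // Take every match within the similarity threshold; if that yields
    // fewer than n, fall back to the best n overall.
    static int matchesToUse(double[] sortedDists, double threshold, int n) {
        int within = 0;
        while (within < sortedDists.length && sortedDists[within] <= threshold) {
            within++;
        }
        return Math.max(within, Math.min(n, sortedDists.length));
    }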
 

I take the top sqrt(tree.size()) scans, limited at 100. I think it's a pretty good balance between 'generality' and 'specificity'.

Skilgannon20:59, 16 March 2013
 

I just take the size divided by some number and limit it to an upper bound. In my new gun, this is 14 with a maximum of 80.

Chase21:02, 16 March 2013
 

Right now Diamond's main gun uses max(1, min(225, numDataPoints / 9)). So it scales linearly from 1 at start of round 1 to 225 data points at about 2000 ticks (~3rd round). I've many times evolved these settings from scratch with genetic algorithms and gotten max-k values from 150 to 350 and divisor from 9 to 14 without much change in performance.

It's important to note that I (and I think most of us) also weight the data points by distance when doing kernel density to decide on an actual firing angle, which is why the actual choice of k (er, N in your post) doesn't matter so much.

Btw if you're spending a lot of time in the gun lab, you might like WaveSim.

Voidious21:17, 16 March 2013
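
Side by side, the three cluster-size rules quoted in this thread (treeSize is the number of data points gathered so far):

    // k-selection rules as stated by their authors above.
    static int kSkilgannon(int treeSize) {           // top sqrt(size), capped at 100
        return Math.min(100, (int) Math.sqrt(treeSize));
    }
    static int kChase(int treeSize) {                // size / 14, capped at 80
        return Math.min(80, treeSize / 14);
    }
    static int kDiamond(int treeSize) {              // max(1, min(225, size / 9))
        return Math.max(1, Math.min(225, treeSize / 9));
    }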
 

Combat uses a constant K, and weights data points proportionally to the farthest of all K closest data points. It's a kind of variable kernel density.

weight = 1 - distance/(max(allDistances)+1)

K is currently set at 19 for the gun (it is this low because it uses only real waves), 17 for wave surfing, and 18 for the flattener.

I've never done any serious tuning except trying a few adjacent K values in RoboResearch and picking the one with the highest APS. Some kind of manual hill climbing.

Using only real waves and not doing any fine tuning is the main reason Combat performs badly in the APS league, but does OK in the PL league.

MN00:16, 17 March 2013
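
A direct transcription of that weighting over the K nearest distances (not Combat's actual source):

    // weight = 1 - distance / (max(allDistances) + 1), per the formula above.
    static double[] kernelWeights(double[] distances) {
        double maxDist = 0;
        for (double d : distances) {
            maxDist = Math.max(maxDist, d);
        }
        double[] weights = new double[distances.length];
        for (int i = 0; i < distances.length; i++) {
            weights[i] = 1 - distances[i] / (maxDist + 1);
        }
        return weights;
    }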
 

Thanks for all the replies. I might implement weighting of points based on distance, something I hadn't considered before.

Voidious: I took a look at WaveSim; it looks cool, but perhaps I'm misunderstanding the point. If it is just playing back recorded battles, how can you ever improve your gun against any bots that take into account how often / where you hit them and react accordingly?

Wolfman09:18, 17 March 2013
 

Well... My first answer is you can't, and for tuning against surfers you still need real battles. Most of the rumble is still not very adaptive, so it's good to have a gun that crushes all those bots, and I'm just tuning my "Main Gun" with WaveSim.

That said, not all weaknesses that guns prey on are things that even surfers adapt to very well. Surfers are reluctant to get too close, have preferred distances, and other tendencies even if they try to flatten their profiles. Skilgannon has tuned against pre-gathered data for surfers and supposedly had success with it, though I'm not sure there's enough data to say it really worked that way or if just tuning his gun to weird new settings is what helped.

Voidious09:26, 17 March 2013
 

The huge increase in simulation speed somewhat compensates for not simulating adaptable movements.

And you can try recording a battle, tuning over the static data, then recording another battle under the new parameters. Iterating many times can make tuning converge to an optimum against adaptable movements, but it is only a theory (never tried).

MN01:20, 18 March 2013
 

There is also the problem of noise when tuning against real battles.

MN01:27, 18 March 2013
 

I tried using WaveSim but I'm having some issues. We try to classify tick N; however, we have only been fed ticks up to around 50 less than tick N, so I cannot get any state from the classify-tick data about the target bot's state at ticks N-1 to N-50. This means I cannot do classification using data like "distance moved last 10 ticks", for instance. Any way to do this?

Wolfman09:28, 18 March 2013

I had that same problem. The suggestion I got was to modify the robot to record whatever data you want, rerun the battles, and then use that in your classifier.

But I ended up just using the Tick Classifier (or whatever it's called).

Chase13:24, 18 March 2013
 

Yeah, WaveSim is really at its best if the data set has all the attributes you use for targeting. Then you can just use the wave classifier and everything's very clean. The TickClassifier gets fed every tick as it happens, so you can use that to supplement the data from the wave classifiers. (Your classifier can implement both.)

It shouldn't be too hard to modify TripHammer to collect different attributes, or to modify your own bot to collect the data. I have a newer/better TripHammer I never got around to releasing, rebased off Diamond after a bit of a refactor/rewrite: [1] ... I'd work off that one if you go that route. voidious/gun/TripHammerGunDataManager has all the data writing stuff pretty cleanly separated out.

Voidious14:51, 18 March 2013
 

Wouldn't it be good to store the ScannedRobotEvents that TripHammer receives, and pipe those into a scanned function in WaveSim, as well as the wave data feeds? After all, that's all the data that any robot has to go on, so you would then have everything you need, and scanned data / waves / classify calls would come in exactly the same order TripHammer saw them, allowing all bots to use WaveSim no matter their configuration.

Wolfman15:04, 18 March 2013
 

Hmm, I think that's a pretty awesome idea in terms of usability, which is probably the main place WaveSim is lacking. I'll definitely look into that if/when I next work on WaveSim.

But another way WaveSim gains speed over real battles is that if you record all the attribute data ahead of time, you don't have to do any complex math (trig, square roots) in your WaveSim test runs to deduce that stuff. So you'd lose that part.

Voidious15:19, 18 March 2013