How to build a good test bed?


It depends on what I am working on.

For movement, often a single bot is enough to prove a theory. For escape angle tuning it's a rambot plus DevilFish; for surfing mechanics, DoctorBob; for anti-GF, RaikoMicro; for anti-fast-learning, Ascendant; and for general unpredictability, Shadow or Diamond.

Targeting I always find less interesting. Maybe because it is a more pure ML problem, with fewer ways to optimise that haven't already been studied in a related field. I decided to brute-force it by adding lots of features and then using genetic optimization to tune the weights against recordings of the entire rumble population, about 5000 battles. The surfers I did separately, but with the same process.
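
For concreteness, the weights being tuned presumably scale each feature inside the KNN distance metric. A minimal sketch of that idea (hypothetical, not the actual code):

  // Weighted squared Euclidean distance over normalized targeting features;
  // the genetic optimization tunes the entries of `weights`.
  static double distanceSq(double[] a, double[] b, double[] weights) {
      double sum = 0;
      for (int i = 0; i < a.length; i++) {
          double d = (a[i] - b[i]) * weights[i];
          sum += d * d;
      }
      return sum;
  }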

Skilgannon (talk) 22:14, 27 September 2017

Wow, thanks for sharing! In the past I only tuned movement against RaikoMicro with RoboRunner and by carefully watching battles, and that worked very well. Recently I tried a more brute-force approach, but it doesn't seem to be working. Maybe for an undeveloped ML area, ideas and theory are more useful.

About the "recordings of the entire population": I'm wondering, will it be useful to tune against wave surfers, which react to fire, in a way where their reaction is irrelevant?

Or can we just treat wave surfers as random movement that is not random enough? With so many attributes, will their reaction to fire be inaccurate enough to be ignored, so that proper decay alone is enough?

BTW, I'm really curious how long a generation takes ;) And how many threads you are using to run it ;)

Xor (talk) 00:40, 28 September 2017

Movement I find much more interesting - I think there is still a lot of unexplored potential there. Targeting can only get as good as the ML system behind it, though. The only tricks I see on the targeting side involve bullet shielding and bullet power optimization.

For surfers I evolved the weights in multiple steps - record data, tune weights, re-record data, retune weights etc. I agree fixed data isn't ideal against learning movements, but it seemed to work ok.

By recorded battles, I mean I actually just recorded the ML-style interactions. So the only work to do in the genetic algorithm was: parse an input line, add it to the tree, and if it was a firing tick, do KNN + kernel density and, N ticks later, check if the prediction was within the correct bounds.
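
For illustration, each recorded interaction might be a line of (firing flag, features, hit GF bounds). This format is an assumption for the sketch, not the actual file layout:

  // One recorded wave: pre-extracted features plus the guess factor range
  // that would have hit, known in hindsight from the recording.
  class RecordedWave {
      boolean firingWave;   // only these trigger a KNN + kernel density prediction
      double[] features;    // pre-extracted attributes (distance, lateral velocity, ...)
      double gfLow, gfHigh; // GF bounds that would have hit

      static RecordedWave parse(String line) {
          String[] parts = line.split(",");
          RecordedWave w = new RecordedWave();
          w.firingWave = Boolean.parseBoolean(parts[0]);
          w.features = new double[parts.length - 3];
          for (int i = 0; i < w.features.length; i++) {
              w.features[i] = Double.parseDouble(parts[i + 1]);
          }
          w.gfLow = Double.parseDouble(parts[parts.length - 2]);
          w.gfHigh = Double.parseDouble(parts[parts.length - 1]);
          return w;
      }
  }

With everything pre-extracted like this, replay needs no physics simulation at all.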

About 15 minutes per generation on an i5-2410M using 4 threads.

Skilgannon (talk) 07:25, 28 September 2017

So only recording gun waves seems OK? And IMO the gun prediction for each wave can be evaluated immediately, since the result is already known. BTW, are you optimizing for overall hit rate (e.g. total hits / total shots across all battles) or for Robocode score (e.g. average bullet damage per battle)? I think the latter should be better when bullet power selection is also being evaluated (or when it is not disabled). But since in real battles hits/misses also affect the total number of waves per round, that would be inaccurate for recorded battles. So how do you deal with bullet power? IMO using the recorded values sounds reasonable, although not perfect.

The difference between evaluating overall hit rate and average bullet damage per battle is interesting. It seems the latter also weights by damage per bullet. And when comparing average hit rate per battle with overall hit rate, the latter weights battles by the number of bullets fired in each.
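
A toy example (made-up numbers) of that weighting difference:

  // Battle 1 fires many bullets, battle 2 fires few.
  public static void main(String[] args) {
      double[] hits  = {120, 30};   // hits per battle
      double[] shots = {400, 60};   // shots fired per battle

      double overall   = (hits[0] + hits[1]) / (shots[0] + shots[1]);   // 150/460 ≈ 0.33
      double perBattle = (hits[0] / shots[0] + hits[1] / shots[1]) / 2; // (0.30 + 0.50)/2 = 0.40

      // The overall rate is pulled toward battle 1 (more shots fired);
      // the per-battle average counts both battles equally.
      System.out.println(overall + " vs " + perBattle);
  }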

Xor (talk) 08:57, 28 September 2017

I optimized for hit rate. Bullet power was kept the same as when it was recorded.

And I saved/loaded all waves (for learning), but only did prediction using firing waves.

Skilgannon (talk) 10:17, 28 September 2017

So... each of those generations was evolved against those 5000 battles, right? What was the size of your population? I tried my hand at genetic tuning some time ago but gave up because my evolution step seemed too slow. I'm wondering what population size gave you those 15 minutes, because one generation with 150 battles takes me waaay more than that :/ I'll need some reference point to optimize my targeting system.

Rsalesc (talk) 22:30, 3 October 2017

From memory, population size was about 20. It was something between gradient descent and a genetic algorithm: moving members away from the weaker ones and towards the stronger ones, plus some random component. Remember, I had already extracted all of the features etc., and saved them just before inserting into the Kd-Tree, so the only thing I needed at evaluation time was:

  1. read data from file
  2. add points to the tree
  3. KNN/KDE
  4. count inliers vs outliers -> give a score

Then at the end, multiply the evolved weights into the code weights, recompile, and collect a new set of data; repeat until happy.
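
Putting those steps together, the per-candidate fitness evaluation might look roughly like this, reusing the hypothetical RecordedWave sketch from earlier; KdTree, knn() and kernelDensityPeak() are stand-ins for the real weighted-KNN and kernel density code:

  // Score one candidate weight vector against a set of recorded waves.
  static double evaluate(Iterable<RecordedWave> waves, double[] weights) {
      KdTree tree = new KdTree(weights);   // distance metric uses the candidate weights
      int inliers = 0, predictions = 0;
      for (RecordedWave w : waves) {
          if (w.firingWave) {
              // predict before learning from this wave; since the hit bounds
              // are already in the recording, the wave scores immediately
              double gf = kernelDensityPeak(tree.knn(w.features, 50));
              if (gf >= w.gfLow && gf <= w.gfHigh) inliers++;
              predictions++;
          }
          tree.add(w.features, w);         // all waves are used for learning
      }
      return predictions == 0 ? 0 : (double) inliers / predictions;
  }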

Skilgannon (talk) 22:56, 3 October 2017

Just to clarify: maybe not in the classic sense, but your algorithm is more of an elitist one than a mutator, is that it?

Rsalesc (talk) 23:07, 3 October 2017

I'm doing exactly that. I just ran a generation with population size 30 on top of 155 battles from top bots of the rumble, and it took me about 15 minutes. I'll debug what is taking so much time later. Thanks for your help :)

Rsalesc (talk) 23:29, 3 October 2017

Make sure you are only simulating aiming on waves that you actually fired.

Skilgannon (talk) 11:10, 12 October 2017

So are you using something similar to PBIL instead of traditional GAs?

Xor (talk) 03:33, 4 October 2017

Cool, I have to get my hands on some ML texts :)

Rsalesc (talk) 12:34, 4 October 2017

I think the closest would be something between gradient descent and stochastic learning.

Skilgannon (talk) 11:09, 12 October 2017

So stochastic gradient descent?

Xor (talk) 16:05, 12 October 2017

Still not quite, because it uses a population like a GA does, and uses linear combinations between population members to estimate a gradient, similarly to how gradient descent would. Honestly, there were probably better/faster algorithms that would have worked out-the-box, but this worked fine, it just took a bit longer.
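
As a rough illustration of that idea (a hypothetical sketch, not the original code), each member could step along the difference between a stronger and a weaker member, plus noise:

  import java.util.Random;

  // One update step: the difference between a stronger and a weaker member
  // serves as a crude gradient estimate. STEP and NOISE are made-up constants.
  class PopulationStep {
      static final double STEP = 0.5, NOISE = 0.05;

      static void step(double[][] pop, double[] fitness, Random rnd) {
          for (double[] member : pop) {
              // tournament pick: the fitter of two random members is "strong"
              int a = rnd.nextInt(pop.length), b = rnd.nextInt(pop.length);
              int strong = fitness[a] >= fitness[b] ? a : b;
              int weak   = fitness[a] >= fitness[b] ? b : a;
              for (int d = 0; d < member.length; d++) {
                  double grad = pop[strong][d] - pop[weak][d]; // points "uphill"
                  member[d] += STEP * grad + NOISE * rnd.nextGaussian();
              }
          }
      }
  }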

Skilgannon (talk) 17:32, 12 October 2017

Well, this combination sounds great, and it is more like how I tune weights by hand than a traditional GA. And it should work far better than hand-tuning, since it runs way more battles with a much larger population.

And it's way faster (and has less variance) with recorded battles. The only problem is overfitting to the recorded battles, but that should be handled well by many tune–rerecord iterations.

Anyway, I'm still wondering about — will it forget the previous tune–rerecord iterations to overfit new iterations? Anyway, since it sounds more like metric learning, it won't surprise me if this one is different. Did you experiment rerunning the old battles after tuning for newer ones to see that?

Xor (talk) 02:49, 13 October 2017

I'm doing nearly the same thing now. I write KNN data points and GFs to files, so all I do is:

read data from file; add to tree; KNN/KDE; count inliers vs outliers. And I'm only doing KNN/KDE on firing waves.

However, it takes me ~10 min per generation with only 1500 TCRM battles.

My population size is also 20, and I'm also using 4 threads. It's a Core i7 with 4 cores at 2.6 GHz, so it should be even faster than the i5-2410M, which has only 2 cores.

Are you reading data and adding to the tree at the same time, or reading the data into memory in one go and adding to the tree afterwards?

Xor (talk) 02:34, 31 May 2019

It was: read a line, add to tree, and if it was a firing tick, do a prediction. For parallelization I just started a new thread for each bot, and joined the thread when the bot was processed. It would probably be a bit faster with a thread pool.
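
A minimal sketch of that thread-pool variant, with processBot() as a hypothetical stand-in for scoring one opponent's recorded waves:

  import java.util.ArrayList;
  import java.util.List;
  import java.util.concurrent.*;

  class PoolEval {
      static double evaluateAll(List<String> bots) throws Exception {
          ExecutorService pool = Executors.newFixedThreadPool(4);
          List<Future<Double>> results = new ArrayList<>();
          for (String bot : bots) {
              results.add(pool.submit(() -> processBot(bot))); // one task per opponent
          }
          double total = 0;
          for (Future<Double> f : results) {
              total += f.get();   // blocks until that bot's score is ready
          }
          pool.shutdown();
          return total / bots.size();
      }

      static double processBot(String bot) { /* hypothetical: replay + score */ return 0; }
  }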

Unfortunately I think I lost this code; it was probably on my university computer...

Skilgannon (talk) 12:09, 5 June 2019

I'm even using a thread pool & NIO for potentially faster execution. Maybe 5000 RoboRumble battles shouldn't take 3x the time of 1500 TCRM battles, since the rumble contains a lot of easy targets which get destroyed in seconds. I'll experiment later.

BTW, my crossover code is not simply doing gradient descent; rather, it either does a gradient-descent step or takes the weight from one parent directly, based on some random process. Random noise is also added with small probability, though. I think this process explores more of the search space than gradient descent + a random component alone. In my experience, the search space of KNN weights is non-trivial, although some patterns exist across most good weights.
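
Something like this sketch, with made-up probabilities and step size:

  import java.util.Random;

  // Per weight: either a gradient-descent-like step along the parents'
  // difference, or a direct inherit from one parent; rare noise on top.
  static double[] crossover(double[] strong, double[] weak, Random rnd) {
      double[] child = new double[strong.length];
      for (int d = 0; d < strong.length; d++) {
          if (rnd.nextDouble() < 0.5) {
              child[d] = strong[d] + 0.5 * (strong[d] - weak[d]); // descent step
          } else {
              child[d] = rnd.nextBoolean() ? strong[d] : weak[d]; // direct inherit
          }
          if (rnd.nextDouble() < 0.05) {
              child[d] += 0.1 * rnd.nextGaussian(); // occasional mutation noise
          }
      }
      return child;
  }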

Xor (talk) 07:16, 6 June 2019

One more question: how many generations are you generally using?

For me, 10 generations produces good enough results, and increasing that to 100 doesn't improve much.

However, it seems that 1500 TCRM battles suffer a lot from overfitting, so I'm trying the full rumble now.

Each time I collect data & do genetic tuning with 1500 TCRM battles, the hit rate increases from ~16% to ~17%; however, the actual TCRM score sometimes even decreases.

Xor (talk) 10:59, 14 June 2019

It depended on the population size and the sampling strategy I used. With a larger population and a less aggressively converging sampling strategy, I could run up to about 100 generations before it would converge.

And I think the solution space is very non-convex, with lots of local minima; I ran quite a few simulations and it converged to a different solution each time.

Skilgannon (talk) 17:11, 16 June 2019

Holy smoke! Using the whole rumble for tune-up. It probably takes half a day to run one generation of a genetic algorithm.

Beaming (talk) 01:33, 28 September 2017

5000 battles on the fly takes me ~4 hrs IIRC. But recorded battles should take less time IMO.

Xor (talk) 02:18, 28 September 2017

What are recorded battles?

Beaming (talk) 03:21, 28 September 2017

e.g. WaveSim by Voidious

Xor (talk) 06:16, 28 September 2017