Talk:WaveSim

From Robowiki
Revision as of 14:35, 26 January 2011 by Voidious (talk | contribs) (→‎Compression?: reply)

Still ironing out some issues here and there, but damn this is cool. =) Time to run normal battles, 20 seasons x 48 bots: 4.75 hours (on 2 threads). Time to run the same gun against the raw data: ~10 minutes. :-D Plus you don't have to hope randomness averages out over the battles - it's the same data every time. --Voidious 23:37, 13 March 2010 (UTC)

Neat stuff here! Actually, back when working on RougeDC, I once had something akin to this set up for quick testing, but I never really used it extensively or made it robust. I wonder if I should set up a robust framework for this for my future targeting experiments. --Rednaxela 23:52, 13 March 2010 (UTC)

I actually wondered if you ever had. =) It's a funny combination of "wow this is so cool!" and "you know this is sooo nothing special." Back when I had access to MATLAB at school, I did play with a wave data set with some SVMs, but other than that I haven't explored testing my classification algorithms outside of Robocode. But I still have the desire to try a lot of clustering experiments, so taking a few days to set this up was well worth it! --Voidious 23:59, 13 March 2010 (UTC)

This has got me thinking. Since the earliest days of Ugluk, the design of the guns and movement have been 'pluggable'. Which is handy because I'd often throw a large set of both against opponents and simply stop using the ones that were least effective. Anyway.. digressing too much.. what I have not yet done is to make the tank completely independent of Robocode, such that with the right input you could run a simulation outside of the client. I can see the benefit of doing this with a recorded set of tank positions, directions, and speeds. Even putting aside the nagging problem of adaptive movements, you can quickly tell if your gun has gone horribly wrong. And of course when testing against non-adaptive movements, you can refine your punishment to squeeze the best point ratios out of your battles, which is what the scoring in the rumble is all about. Defeating good / adaptive bots is secondary. --Martin 21:11, 15 March 2010 (UTC)

QT Clustering sounds interesting. Reminds me of my density suggestion, except without the normal distribution. I wonder if there is a way to dynamically determine the best threshold as well. I would guess 'the point where the density of point to distance becomes less than those nearer to the center' but that is a bit abstract and is useless for building clusters, since not all points or for that matter clusters (with real data) fit this kind of definition. --Chase 10:33, 20 March 2010 (UTC)

My recent gun tests have been against a field of 47 mid-range bots I had in an old RoboResearch test bed. Last night before bed, I took 5 minutes and used BedMaker to create a test bed of 250 bots that Diamond scores between 70% and 90% against, then started collecting 6 seasons of gun data (1500 battles) against them with TripHammer RES. I felt so cool! =) --Voidious 16:15, 25 January 2011 (UTC)

Compression?

Hmm, since you end up with huge CSV files with this, why not do some compression with GZIPOutputStream? Not only would it save disk space, but I have a feeling it could make WaveSim run faster due to reduced reading from disk. --Rednaxela 06:18, 26 January 2011 (UTC)
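The suggestion amounts to swapping the plain file stream for a gzip stream on the write side. A minimal sketch (class and method names here are illustrative, not WaveSim's actual code):

```java
import java.io.BufferedWriter;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.util.zip.GZIPOutputStream;

public class GzipCsvWriter {
    // Write CSV lines through a gzip stream; compression happens
    // transparently as the lines are written.
    public static void writeLines(String path, Iterable<String> lines)
            throws IOException {
        try (PrintWriter out = new PrintWriter(new BufferedWriter(
                new OutputStreamWriter(
                    new GZIPOutputStream(new FileOutputStream(path)), "UTF-8")))) {
            for (String line : lines) {
                out.println(line);
            }
        }
    }
}
```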

Good call, trying it now with some existing data. First run was 889 uncompressed, 904 gzip - hmm. Still worth keeping for the size, but no speed increase. Started to think just last night how unoptimized this code might be. I added regular MEA and corresponding regular GFs last night and then wrote a method to massage my existing data files into the new format. It was taking forever! Turns out a zillion string appends in a tight loop is bad, and StringBuilder is awesome.
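For reference, the difference is between quadratic and linear work: `String +=` in a loop copies the whole string on every append, while StringBuilder appends in place. A minimal sketch of building one CSV line per record (the field layout is illustrative, not WaveSim's actual format):

```java
public class RecordFormatter {
    // Build one CSV line from a record's numeric fields.
    // String += here would be O(n^2) in the line length;
    // StringBuilder keeps it linear.
    public static String toCsvLine(double[] fields) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < fields.length; i++) {
            if (i > 0) {
                sb.append(',');
            }
            sb.append(fields[i]);
        }
        return sb.toString();
    }
}
```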

Now I'm wondering if there are some magical incantations for reading these files better. I was using BufferedReader(FileReader()), now it's BufferedReader(InputStreamReader(GZIPInputStream(FileInputStream()))). I tried (when still uncompressed) FileInputStream and reading in the whole file at once, then parsing it, but that was slower.
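That chain looks right; the main thing is keeping the BufferedReader on the outside so each readLine() doesn't hit the decompressor one character at a time. A sketch (helper name is illustrative):

```java
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.zip.GZIPInputStream;

public class WaveReader {
    // Open a gzipped CSV for line-by-line reading. Buffering on the
    // outermost layer amortizes both decompression and disk reads.
    public static BufferedReader open(String path) throws IOException {
        return new BufferedReader(new InputStreamReader(
            new GZIPInputStream(new FileInputStream(path)), "UTF-8"));
    }
}
```

A larger buffer size passed to GZIPInputStream's two-argument constructor might also be worth trying.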

--Voidious 13:35, 26 January 2011 (UTC)


Contents

Thread title | Replies | Last modified
TripHammer logging duplicated EndRoundRecord | 0 | 17:39, 12 February 2018
WaveSim Unavailable | 0 | 11:50, 4 February 2017
Test against algorithm instead of data | 0 | 18:08, 13 December 2013
Genetic tuning | 6 | 13:23, 19 January 2013
Thanks again! | 5 | 18:15, 13 January 2013

TripHammer logging duplicated EndRoundRecord

Thanks for open-sourcing TripHammer, which is a very useful tool for recording waves! However, after analyzing data recorded by TripHammer, I found a bug in its implementation.

TripHammer logs an EndRoundRecord when it receives onWin or onDeath. However, both may fire in the same round (e.g. you kill the opponent, and later you are killed as well), which causes the EndRoundRecord to get logged twice.

Worse, in rare cases no EndRoundRecord is logged at all, making the battle data "34" rounds long.

Btw, there seems to be a typo in "TRADITIONAL_WAVE_END_SIGNATURE" where the second "I" is missing in the source ;)
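A small guard would dedupe the two events (names here are illustrative, not TripHammer's actual code); I believe newer Robocode versions also expose onRoundEnded, which fires exactly once per round:

```java
public class RoundLogger {
    private int lastLoggedRound = -1;

    // Returns true only the first time a given round is logged, so
    // onWin and onDeath firing in the same round produce one record.
    public boolean logEndRound(int round) {
        if (round == lastLoggedRound) {
            return false; // already logged this round
        }
        lastLoggedRound = round;
        return true;
    }
}
```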

Xor (talk)17:39, 12 February 2018

WaveSim Unavailable

Hi, I was trying to do something like WaveSim before I found it but I think that WaveSim is better. I tried to download it but dijitari.com is closed. Can you please do something about it?

Dsekercioglu (talk)11:50, 4 February 2017

Test against algorithm instead of data

Was just thinking it might be cool to add support for having a movement algorithm decide whether shots hit or not, instead of pre-gathered data. You could create a simple pseudo-surfer that runs quite fast, I think. Maybe I could also write a system to generate an algorithm that behaves like an already gathered data set.

WaveSim could also stand to be a lot easier to use. Thinking I might get it up on GitHub and touch it up a little sometime.

Voidious (talk)18:08, 13 December 2013

Genetic tuning

WaveSim looks great for genetically tuning your gun classification, and that would probably be the biggest improvement to Combat.

But what about movement and energy management? I wonder how people here tune them.

MN02:43, 16 January 2013

I've never done any genetic tuning outside of WaveSim for gun classification, but I've thought about it. At first it seems like I'd want to be able to, from Java code, modify some other Java code and package it into a Robocode bot, copy that bot into my RoboRunner robots dir, then run battles. That always seemed really cumbersome to me.

But I think a much simpler approach is to just have one packaged bot that you always run: it reads in the DNA string from a file and interprets it however you like, then your genetic algorithm code just modifies the input file and runs more battles. This is the approach I'd take. Once it's done, you can convert it to real hard-coded behavior if you like. Especially if you're tuning a MiniBot (or smaller), you'd probably want to remove the file reading code.
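A sketch of the bot-side half of that idea, loading genes from a properties file (file layout and key names are hypothetical; a real bot would get the file via AdvancedRobot's data directory):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class Dna {
    // Read gene values "gene.0", "gene.1", ... from a properties file.
    // The genetic algorithm rewrites this file between battles; the
    // bot itself never changes.
    public static double[] load(File chromosomeFile, int geneCount)
            throws IOException {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream(chromosomeFile)) {
            props.load(in);
        }
        double[] genes = new double[geneCount];
        for (int i = 0; i < geneCount; i++) {
            genes[i] = Double.parseDouble(props.getProperty("gene." + i, "0"));
        }
        return genes;
    }
}
```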

Voidious02:49, 16 January 2013

I tried this approach twice. Genetic tuning generated a chromosome.properties file in the data directory, which was read by the bot during a battle. It works.

The challenge is working around the noise and slowness of full battles. There are techniques for dealing with noisy and slow fitness functions, and I tried some of them.

The naive approach would be running like 15 to 50 battles per chromosome and averaging the results, taking days to evaluate a single generation.

Another approach is running a single battle for each chromosome, then averaging the results over many nearby chromosomes (k-NN style). Assuming similar chromosomes give similar results, averaging them can suppress the noise while still giving a meaningful fitness evaluation. It took about 1 hour per generation. The technique is called "averaging over space".
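A minimal sketch of that smoothing step, assuming chromosomes are fixed-length gene vectors and Euclidean distance in gene space (names are illustrative):

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class SpatialFitness {
    // "Averaging over space": smooth a chromosome's noisy
    // single-battle score with the scores of its k nearest
    // neighbors in gene space.
    public static double smoothedFitness(double[] genes, List<double[]> others,
            List<Double> scores, double ownScore, int k) {
        Integer[] idx = new Integer[others.size()];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        // Sort neighbor indices by distance to this chromosome.
        Arrays.sort(idx, Comparator.comparingDouble(
            i -> distance(genes, others.get(i))));
        double sum = ownScore;
        int count = 1;
        for (int i = 0; i < Math.min(k, idx.length); i++) {
            sum += scores.get(idx[i]);
            count++;
        }
        return sum / count;
    }

    static double distance(double[] a, double[] b) {
        double d = 0;
        for (int i = 0; i < a.length; i++) d += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(d);
    }
}
```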

Since the fitness landscape keeps changing due to more neighbours being added all the time, another technique called "random immigrants" is useful to avoid over-fitting. The technique basically replaces a few chromosomes (the worst ones) with new random ones.
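The replacement step is simple; a sketch, assuming the population is kept sorted worst-first and genes live in [-1, 1) (both assumptions, not part of any particular GA library):

```java
import java.util.List;
import java.util.Random;

public class RandomImmigrants {
    // Replace the worst few chromosomes with fresh random ones each
    // generation to keep diversity as the fitness landscape shifts.
    public static void replaceWorst(List<double[]> populationWorstFirst,
            int immigrants, int geneCount, Random rnd) {
        for (int i = 0; i < immigrants && i < populationWorstFirst.size(); i++) {
            double[] genes = new double[geneCount];
            for (int g = 0; g < geneCount; g++) {
                genes[g] = rnd.nextDouble() * 2 - 1; // genes in [-1, 1)
            }
            populationWorstFirst.set(i, genes);
        }
    }
}
```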

Also tried running a single battle per chromosome without averaging, but it was too unstable. Survival of the fittest became survival of the luckiest.

Another challenge is parallelizing fitness evaluation. JGAP's built-in parallelization is awful for the techniques above, so I built another one from scratch. Pick all chromosomes from a bulk fitness function, evaluate them in parallel (RoboRunner style), regroup/average all results and return.
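The bulk-evaluate-then-regroup pattern might look like this with a plain thread pool (runBattle is a hypothetical stand-in for whatever launches a Robocode battle and returns a score):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Function;

public class BulkEvaluator {
    // Evaluate a whole generation in parallel, one battle per
    // chromosome, then collect the scores back in order.
    public static double[] evaluateAll(List<double[]> chromosomes,
            Function<double[], Double> runBattle, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<Double>> futures = new ArrayList<>();
            for (double[] genes : chromosomes) {
                futures.add(pool.submit(() -> runBattle.apply(genes)));
            }
            double[] scores = new double[chromosomes.size()];
            for (int i = 0; i < scores.length; i++) {
                scores[i] = futures.get(i).get(); // blocks until done
            }
            return scores;
        } finally {
            pool.shutdown();
        }
    }
}
```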

Tried with Combat, tuning gun/movement/energy management against DrussGT alone, but failed due to over-specialization. Also tried with Impact, tuning energy management against the Rambot Challenge 2K6, this one worked ok.

WaveSim works around the problems above by using a fixed data set and a targeting simulation.

MN01:18, 17 January 2013
 

Unfortunately genetic tuning is very long and tedious if you do it with a whole robot. Genetics also happens to cheat, so you need a good sample size to test against. I have genetically evolved a robot that could perfectly defeat Walls from a certain starting position, for example, but nowhere else.

Chase18:29, 18 January 2013
 

That's pretty cool you tried it with overall robot behavior. That's definitely on another level in terms of complexity. I think that's usually the distinction between "Genetic Algorithms" (your code interprets the "DNA" string and alters its behavior in predefined ways) and "Genetic Programming" (your "DNA" string is translated into real code that itself is run as the bot's behavior).

Voidious22:11, 18 January 2013
 

It was the latter. It took about 100 generations (each generation with 1000 randomly mutated samples). I didn't take the nature of genetic evolution into consideration. I thought it would evolve the way we write robots, you know: a radar, some basic back-and-forth movement. Nope, everything spun; it just happened to stumble upon a perfect speed and turning ratio to move from its start location (dodging all of Walls' shots) and a perfect speed to spin its gun and fire to hit Walls.

But this was from specific starting positions and angles, which I had programmed into the battle to remove all the extra 'noise', so I could run shorter battles and still get a good sample. Since neither Walls nor the robot's genetics had any random function, I figured it was safe.

After all that work, I decided not to pursue it further, since it was long and boring, and I could write a better robot in 5 minutes.

Chase06:10, 19 January 2013

Preserving the 'noise' is useful in these situations to avoid over-fitting, although using only Walls in the fitness function will still produce a bot which is only good against it.

MN13:23, 19 January 2013
 
 

Thanks again!

Going to be trying this out to tune my gun, which is otherwise very simple, since I don't have the patience to run the millions of battles otherwise required. Thanks!

First, however, I have to download the 2 GB of test data. :)

Chase22:53, 12 January 2013

Cool, good luck! I have a newer/bigger data set that I'll upload now too. It was collected with an updated TripHammer, based on the post-refactor Diamond code base, and it's 10k battles. The file format is the same, but I did tweak/fix some small things in that refactor, like part of the precise MEA calculations.

Voidious23:15, 12 January 2013
 

10k battles of 35 rounds. So that is about 10 seasons?

Chase00:35, 13 January 2013
 

It's 25 seasons of a 400 bot test bed using 35-round battles. Upon realizing it's 12 gigs, I'm splitting it into 5-season chunks for easier downloading. =)

Voidious00:41, 13 January 2013
 

Thanks for that! :)

Chase17:28, 13 January 2013
 

Got the first 2 posted: [1] [2] I'll put proper links on the page when all 5 are up, should be by end of day.

Voidious18:15, 13 January 2013