Did the flattener help against weak bots?


So, the best version that I ever had was 1.95, by a fairly large margin. In 1.96, I fixed a bug where I had accidentally left the flattener on for any bot hitting me at more than (normalizedProbability - 0.045), which happens with far more bots than a flattener would usually be used against (it corresponds to roughly a 7% hit rate). Is it possible that the flattener helps?

AW 02:47, 9 November 2012

I think there's only one way to find out. :-) Try lowering the threshold?

But I'd say it largely depends on how heavily your flattener is weighted, how your hit percentage is normalized, and so on. A 7% hit rate might be low, but it really depends on the distance and bullet power, so I'd have to see how you're normalizing it to say.

Diamond's main flattener is enabled at hit rate = 5.9%, normalized to a precise escape angle of 0.98 and a precise bot width of 0.1 radians, which is something like bullet power = 2 and distance = 500. I also found a little bit of improvement with another, lower weighted flattener enabled at half that hit percentage. So maybe that can give you a ballpark idea of how high/low your threshold is compared to mine, which I think is pretty well tuned. I also add a margin of error based on the number of shots, like the margin of error they calculate in election polling, which lets me set the threshold as low as possible without much risk. (So it's only really 5.9% after a lot of shots; early on it's quite a bit higher.)
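For illustration, here's a minimal sketch of that polling-style margin of error, assuming a standard normal-approximation confidence interval; the class and method names are hypothetical, not Diamond's actual code, and the hit rate normalization is elided:

```java
// Hypothetical sketch, not Diamond's actual code: enable the flattener
// only when the observed hit rate minus a polling-style margin of error
// still exceeds the base threshold. Early on the margin is large, so
// the effective threshold is quite a bit higher than 5.9%.
public class FlattenerTrigger {
    private static final double BASE_THRESHOLD = 0.059; // normalized hit rate
    private static final double Z = 1.96;               // ~95% confidence

    private int shots;
    private double hits; // normalized hit weight accumulates here

    public void onEnemyWavePassed(boolean hitUs, double normalizedWeight) {
        shots++;
        if (hitUs) {
            hits += normalizedWeight;
        }
    }

    public boolean flattenerEnabled() {
        if (shots == 0) {
            return false;
        }
        double p = hits / shots;
        // Standard error of a proportion, as in election polling;
        // it shrinks like 1/sqrt(n) as more shots are logged.
        double marginOfError = Z * Math.sqrt(p * (1 - p) / shots);
        return p - marginOfError > BASE_THRESHOLD;
    }
}
```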

Voidious 03:39, 9 November 2012
 

Another thought: don't be afraid to just re-release 1.95 as 1.95b. There could have been some weird battles or something and you're just chasing shadows; I've definitely had that happen.

Voidious 17:07, 13 November 2012
 

Well, the re-release is showing that 1.95 really was the strongest by a large margin. When I said I had the flattener on, I was being imprecise: what I actually had was one of my "semi-advanced" classifiers being trained with visits and hits, and one of my flatteners being trained with only hits. Regardless, I ran some tests this morning that show 1.99.5's gun is weaker than 1.95's. I had previously spent all my time searching for something I broke in the movement when I "fixed" the gun, but I guess I broke something in the gun instead...

My changes to the gun (from Gilgalad's targeting strategy page): version 1.99.4 is, I think, the first robot to handle virtual waves exactly. After a real wave is fired, the virtuality and bullet power attributes for the virtual waves are set. A virtual wave's bullet power is a weighted average of the two real waves surrounding it (in terms of time). At training time, the waves play through a log of positions with the new bullet power. Attributes, which can depend on bullet power, are calculated only when training and aiming (aiming uses the estimated bullet power).
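As a hedged sketch of that time-weighted average (the helper and its parameters are hypothetical, not Gilgalad's actual code):

```java
// Hypothetical helper, not Gilgalad's actual code: a virtual wave's
// bullet power is interpolated linearly (in time) between the real
// waves fired just before and just after it.
static double interpolatedPower(long virtualFireTime,
                                long prevRealFireTime, double prevRealPower,
                                long nextRealFireTime, double nextRealPower) {
    if (nextRealFireTime == prevRealFireTime) {
        return prevRealPower; // degenerate case: same-tick waves
    }
    // Fraction of the way (in time) from the previous real wave to the next.
    double t = (double) (virtualFireTime - prevRealFireTime)
             / (nextRealFireTime - prevRealFireTime);
    return (1 - t) * prevRealPower + t * nextRealPower;
}
```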

That should only improve the accuracy if done correctly, right?

AW 17:02, 29 November 2012
 

So you have two attributes, virtuality and bullet power. Virtuality is 0 at fire time and scales linearly to 1 halfway between firing bullets? If so, I certainly agree with that, and it's what I do in Diamond.
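In code, that scheme might look something like this minimal sketch (a hypothetical helper, not Diamond's actual implementation):

```java
// Hypothetical sketch of the virtuality scheme: 0 for a wave fired on
// the same tick as a real bullet, scaling linearly to 1 for a wave
// fired halfway between two real bullets.
static double virtuality(long waveFireTime,
                         long prevRealFireTime, long nextRealFireTime) {
    long gap = nextRealFireTime - prevRealFireTime;
    if (gap <= 0) {
        return 0;
    }
    long distToNearestRealWave = Math.min(waveFireTime - prevRealFireTime,
                                          nextRealFireTime - waveFireTime);
    // Halfway between the two real waves, dist == gap / 2, giving 1.0.
    return 2.0 * distToNearestRealWave / gap;
}
```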

And bullet power is the average of the two surrounding firing waves. I'm not sure about that part. The enemy hasn't seen the second wave's bullet power until it's fired, so how could it be an input for predicting his movement before then? Or, if you're looking at it as both bullets being in the air during flight time, shouldn't it instead be the weighted average of all bullets in the air over the course of the bullet's flight time? (And FWIW, I barely weight bullet power as an attribute, and it's a pretty recent addition.)

I definitely think virtuality is the right way to handle gun heat as an attribute, but I haven't really proven it outperforms other approaches in KNN guns.

Voidious 17:30, 29 November 2012
 

Well, the bullet power interpolation is based on the same idea as virtuality. If the enemy reacts to my firing of bullets in a way that depends on bullet power, then I assume their movement between waves gives information that is relevant to firing (if I didn't make this assumption, I wouldn't use non-firing waves). So the question is what relation the supposed bullet power of a virtual wave has to the real waves. I assume that the closer it is to a real wave, the more similar the supposed power should be to that real wave's power. This is mostly based on the assumption that they move with a Raiko-like movement (a random chance of changing direction every turn). If this is true, they react to the most recent bullet power whenever a new bullet is fired, but the approximation I chose was scaling the virtual waves' powers linearly between the two surrounding real waves. Two things to note:

1) I don't just use this for the bullet power attribute; I use it for everything dependent on bullet power (bullet flight time, wall distance, etc.); see the sketch after these notes.

2) I added this feature when I added a bullet power selection function that often changed radically between two shots.
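Here is a minimal sketch of point (1), recomputing power-dependent attributes from the interpolated power at training time; the names and the attribute set are hypothetical, but the bullet speed formula (20 - 3 * power) is standard Robocode physics:

```java
// Hypothetical sketch of point (1): attributes that depend on bullet
// power are recomputed from the interpolated power when a wave is
// trained (or from the estimated power when aiming), rather than
// frozen at wave-creation time. Robocode bullet speed is 20 - 3 * power.
static double bulletSpeed(double power) {
    return 20 - 3 * power;
}

static double[] attributes(double enemyDistance, double lateralVelocity,
                           double interpolatedPower) {
    // Rough flight time; a real gun would use precise prediction.
    double flightTime = enemyDistance / bulletSpeed(interpolatedPower);
    return new double[] {
        Math.abs(lateralVelocity) / 8.0,  // normalized lateral velocity
        Math.min(flightTime / 80.0, 1.0)  // normalized bullet flight time
    };
}
```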

AW 00:02, 9 December 2012
 

Hmm, I don't know, I just really don't. And the wave speed changes with bullet power too, right? I can think of a few issues to consider.

One is that this sort of double-penalizes non-firing waves. It's true that a non-firing wave might attach the wrong bullet power (input) to the firing angle (output) that it yields. But we're already weighting against this data with the high virtuality, because it's fuzzier, semi-duplicate data. Does further modification of the virtual wave data screw up how much we're weighting against non-firing waves with virtuality?
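For concreteness, one hypothetical way virtuality can weight against non-firing waves in a KNN gun (not necessarily Diamond's scheme) is as a dimension in the distance function, so high-virtuality points simply sit farther from any real firing situation (virtuality = 0):

```java
// Hypothetical KNN distance with virtuality as a weighted dimension
// (not necessarily Diamond's scheme): non-firing data is still used,
// but matches real firing situations less closely.
static double weightedDistanceSq(double[] a, double[] b, double[] weights) {
    double sum = 0;
    for (int i = 0; i < a.length; i++) {
        double d = weights[i] * (a[i] - b[i]);
        sum += d * d;
    }
    return sum;
}

// Example usage: the last attribute is virtuality with a modest weight.
// double[] weights = { 1.0, 1.0, 0.5 /* virtuality */ };
```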

I've had some counter-intuitive results come up with wave bullet powers / speeds. For instance, against bots that don't react to bullet fire, shouldn't you be able to fire waves at every bullet power you might use and always aim based on waves collected with the correct bullet power? My tests with stuff like this never pan out. I feel like aiming with any bullet power other than what you're really using is bad news. So if you're switching between power = 1.9 and 3.0, you might end up capturing a lot of power = 2.5 waves that are less useful than either of the other two.

I think it's a good idea, but I can't say for sure it's a slam dunk; I just think you'd need to run a lot of tests to really be sure. This does strike me as the kind of stuff we'll have to explore to further improve how we classify data in Robocode.

Voidious 00:47, 9 December 2012
 

Well, firing waves at every bullet power you might use is very similar to PIF... However, notice that they could be indirectly reacting to fire power by doing something like changing direction randomly every time a bullet is fired (and bullets are fired more frequently at lower powers). If a robot just had a threshold for changing direction and evaluated it every turn, regardless of bullet fire, I would expect firing multiple waves at different powers to be effective, but probably not worth the extra CPU.
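To make that indirect reaction concrete, here's a minimal sketch of such a mover (a hypothetical bot, not any real one): it never reads bullet power, but because low-power guns cool faster and fire more often, it ends up changing direction more often against them. Enemy fire is detected with the standard energy-drop trick (a bullet costs 0.1 to 3.0 energy):

```java
import robocode.AdvancedRobot;
import robocode.ScannedRobotEvent;

// Hypothetical mover illustrating the indirect reaction described
// above: flip direction with 50% probability whenever an enemy energy
// drop indicates a fired bullet. Wall hits and our own bullet damage
// also drop enemy energy; a real bot would account for those.
public class TwitchyMover extends AdvancedRobot {
    private double lastEnemyEnergy = 100;
    private int direction = 1;

    public void run() {
        while (true) {
            turnRadarRight(360); // keep scanning
        }
    }

    public void onScannedRobot(ScannedRobotEvent e) {
        double drop = lastEnemyEnergy - e.getEnergy();
        lastEnemyEnergy = e.getEnergy();
        // Bullet powers range from 0.1 to 3.0, so a drop in that range
        // (probably) means the enemy just fired.
        if (drop >= 0.1 && drop <= 3.0 && Math.random() < 0.5) {
            direction = -direction;
        }
        setAhead(100 * direction);
    }
}
```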

AW 21:46, 9 December 2012
 

Also, about testing this: a targeting challenge doesn't really work, since the bullet power is almost always 3. I could run a TC with a bullet selection algorithm, but that leaves the choice of which one. Which is complicated, since the main advantage of this new idea is that it should minimize the penalty for radically switching bullet power. So if I don't do that, it's sort of pointless, but the other algorithm isn't designed for that, so an increased score could just come from better bullet power selection or a broken approach...

In short: what bullet power selection should I test this with?

AW 21:53, 9 December 2012