BerryBots updates

Fragment of a discussion from User talk:Voidious

I finally got around to writing a learning gun for BerryBots. At first I was just out to port Rednaxela's kd-tree to Lua, since I thought that was a prerequisite to the gun I wanted, and a prerequisite to testing correctness of a kd-tree is writing an optimized linear KNN search. But once I had a linear KNN search, I realized it would be plenty fast enough to run the targeting challenge stage, which is only a few thousand ticks per enemy with a decent gun, so I built out the rest of the gun.

So it's mostly just KNN with waves / displacement vectors. A precise MEA and GuessFactors are probably worth pursuing. But I think the next thing to tackle is the presence of walls, which has a lot of ramifications.

It came out pretty good: testgun.lua. Not much code, runs pretty fast, reasonably effective. And it's a new high score on Laser Gallery.

Voidious (talk) 00:26, 14 August 2013

Ahh, neat.

With regards to handling the presence of walls, that's a case where I see the utility of PIF versus displacement vectors and GuessFactors as being significant. GuessFactors aren't really well suited to the presence of walls. Displacement vectors let you handle walls occluding where the enemy ends up. PIF, however, lets you handle not only walls occluding where they end up, but also rule out paths that would be interrupted by walls along the way.

Rednaxela (talk) 15:04, 14 August 2013
 

Yeah, agreed that PIF has some key advantages. Checking wall collisions every PIF tick might be slow, I'm not sure. There's no trig, but it would be (# walls) * (# projections) * (# ticks per projection) line intersection checks - not sure how much that adds to the PIF CPU load.
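
Roughly, the naive version might look something like this (just a sketch, assuming walls are stored as {x1, y1, x2, y2} tables and a projection is a list of {x, y} positions per tick - none of this is actual BerryBots API):

  -- Naive per-tick wall check inside a PIF replay: test each tick's movement
  -- segment against every wall segment.
  local function segmentsIntersect(ax, ay, bx, by, cx, cy, dx, dy)
    local d = (bx - ax) * (dy - cy) - (by - ay) * (dx - cx)
    if d == 0 then return false end  -- parallel; ignore collinear overlap
    local t = ((cx - ax) * (dy - cy) - (cy - ay) * (dx - cx)) / d
    local u = ((cx - ax) * (by - ay) - (cy - ay) * (bx - ax)) / d
    return t >= 0 and t <= 1 and u >= 0 and u <= 1
  end

  local function projectionHitsWall(path, walls)
    for i = 1, #path - 1 do
      local p, q = path[i], path[i + 1]
      for _, w in ipairs(walls) do
        if segmentsIntersect(p.x, p.y, q.x, q.y, w.x1, w.y1, w.x2, w.y2) then
          return true, i  -- collision on tick i of this projection
        end
      end
    end
    return false
  end

That inner loop over every wall is the (# walls) term in the cost above.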

I'm excited to try this in Melee, too, since you can see everyone at once. Though I may need that kd-tree pretty quickly.

Voidious (talk) 16:56, 14 August 2013

You don't need to do that many line intersection checks. If you store your walls in a tree (e.g. an r-tree or a bucket kd-tree), or even a sorted list (sorted by one of the two axes), you can avoid checking every single wall by quickly ruling out large groups of walls that are too far away.
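
For instance, with just a sorted list (a sketch; the wall format is an assumption, not BerryBots API):

  -- 'walls' is a list of {x1, y1, x2, y2} segments, kept sorted by minimum x,
  -- so a cheap range comparison rules most of them out before any
  -- intersection test is done.
  local function sortWalls(walls)
    table.sort(walls, function(a, b)
      return math.min(a.x1, a.x2) < math.min(b.x1, b.x2)
    end)
  end

  local function candidateWalls(walls, segMinX, segMaxX)
    local result = {}
    for _, w in ipairs(walls) do
      if math.min(w.x1, w.x2) > segMaxX then break end  -- later walls are all too far right
      if math.max(w.x1, w.x2) >= segMinX then           -- x-ranges overlap, keep it
        table.insert(result, w)
      end
    end
    return result
  end

A binary search for the first candidate would cut it down further, but even the early break avoids touching most of the list.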

If your robot is using a pathfinding algorithm to navigate the walls, you could re-use that structure to find which walls you need to check (if any), avoiding the need for a separate structure to search walls (e.g. track the current pathfinding node and use that node's information to know which boundaries, if any, to check on a given tick).

To save per-tick iterations, you can also use a trick I've used and seen used in PIF implementations: based on the minimum number of ticks away waves/walls are, you can skip ahead in your PIF history, ignoring all the ticks in between.
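
Something like this, assuming you can cheaply get the distance from the current projected position to the nearest wall:

  -- Skip-ahead sketch: if the nearest wall is wallDist away and the bot moves
  -- at most maxSpeed per tick, no collision is possible for the next
  -- floor(wallDist / maxSpeed) ticks, so jump the replay index that far ahead
  -- and only do the expensive checks when a collision is actually possible.
  local function nextCheckTick(tick, wallDist, maxSpeed)
    local safeTicks = math.floor(wallDist / maxSpeed)
    return tick + math.max(1, safeTicks)
  end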

In other words, you can more or less reduce both the (# walls) term and the (# ticks per projection) term to much smaller numbers.

Rednaxela (talk) 00:07, 15 August 2013

Thanks for the ideas! I love the kd-tree/sorted list one. I might have to go back and optimize it in the main game engine.

Voidious (talk) 16:36, 15 August 2013

I'd say for that purpose r-trees are more suited than kd-trees, but yeah. From what I've heard, it's very much standard practice to use r-trees or other spatial structures in game engines where you have a very large number of entities/surfaces/etc. that you may want to perform collision detection against.

With regards to 'other structures': so long as things are in a bounded-size region and the density of objects doesn't vary too extremely, you'd probably get better results from a very simple approach than from a tree: divide the area into a 2D grid, scaled so that you have no more than a few entities in each cell, and in each cell store a list of the entities within it. The main reason to use a tree instead of a coarse grid is when the density of collidable objects/surfaces varies significantly between areas (i.e. you don't want lots of cells that are near-empty alongside other cells that hold thousands of objects).
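
A minimal sketch of that grid approach (cell size and wall format are just assumptions for illustration):

  -- Uniform grid: register each wall in every cell its bounding box overlaps,
  -- then a query only looks at the walls stored in the cell it touches.
  local CELL = 100
  local grid = {}

  local function cellKey(cx, cy) return cx .. "," .. cy end

  local function addWall(w)
    local minX, maxX = math.min(w.x1, w.x2), math.max(w.x1, w.x2)
    local minY, maxY = math.min(w.y1, w.y2), math.max(w.y1, w.y2)
    for cx = math.floor(minX / CELL), math.floor(maxX / CELL) do
      for cy = math.floor(minY / CELL), math.floor(maxY / CELL) do
        local key = cellKey(cx, cy)
        grid[key] = grid[key] or {}
        table.insert(grid[key], w)
      end
    end
  end

  local function wallsNear(x, y)
    return grid[cellKey(math.floor(x / CELL), math.floor(y / CELL))] or {}
  end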

Rednaxela (talk) 17:09, 15 August 2013
 
 

So thinking this through now, the walls in BerryBots have some interesting implications for PIF, too. For instance, if a log you're replaying includes that bot hitting a wall, what do you do? Or if replaying a log would cause the projection to hit a wall? In Robocode you might throw those out - it's rare and kind of a flukey situation, since you basically never want to hit a wall. But there's no real disincentive to hitting walls in BerryBots, so it could be super frequent. Maybe you instead replay until the wall hit, then do a sort of single tick style re-projection for similar situations from that point on. And you probably want to precisely predict the wall bounce physics too.
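
For the bounce part, the projection needs something like a velocity reflection at the wall - this is just a placeholder sketch with a made-up elasticity constant, not the actual engine physics:

  -- Reflect a projected velocity off an axis-aligned wall. WALL_BOUNCE is a
  -- placeholder elasticity factor, not the real BerryBots value.
  local WALL_BOUNCE = 0.5

  local function bounce(vx, vy, wallIsVertical)
    if wallIsVertical then
      return -vx * WALL_BOUNCE, vy   -- vertical wall flips the x component
    else
      return vx, -vy * WALL_BOUNCE   -- horizontal wall flips the y component
    end
  end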

Voidious (talk) 01:12, 18 August 2013

Maybe what would make sense would be "fuzzy" wall collision handling that validates the projection against what happened in the PIF log:

  1. Detect and log when the enemy hits a wall as part of the PIF log.
  2. If the log being replayed does not contain a wall collision at a similar time, throw out the projection if it collides with a wall by more than a certain margin (give leeway for the projection barely glancing it)
  3. If the log being replayed does contain a wall collision, throw out the projection unless it also contains a wall collision at a similar time (or at least a near-collision)

The idea would be to filter projections based on an approximate presence/absence of a collision matching the PIF log, in order to ensure the scenario of that projection is sufficiently similar to what you'll be predicting.
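
A sketch of how that filter might look, assuming each PIF log and each projection records the ticks on which a wall collision happened (names and thresholds are made up for illustration):

  -- Fuzzy validation: compare wall hits in the projection against wall hits
  -- recorded in the PIF log, allowing a time window and a glancing margin.
  local TIME_WINDOW = 5      -- ticks of leeway when matching collision times
  local GLANCE_MARGIN = 10   -- how deep a projected collision must be to count

  local function hasNearbyHit(hitTicks, tick)
    for _, t in ipairs(hitTicks) do
      if math.abs(t - tick) <= TIME_WINDOW then return true end
    end
    return false
  end

  -- logHits / projHits: lists of ticks with a wall collision;
  -- projDepths[tick]: how far past the wall the projection went on that tick.
  local function projectionIsValid(logHits, projHits, projDepths)
    for _, tick in ipairs(projHits) do
      if not hasNearbyHit(logHits, tick) and projDepths[tick] > GLANCE_MARGIN then
        return false   -- projection hits a wall the log never did (rule 2)
      end
    end
    for _, tick in ipairs(logHits) do
      if not hasNearbyHit(projHits, tick) then
        return false   -- log hit a wall but the projection never does (rule 3)
      end
    end
    return true
  end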

Rednaxela (talk) 15:48, 19 August 2013

Very cool ideas. I guess you might need to allow for more fuzziness depending on the wall layout / wall hit frequency. I still like the idea of re-projecting from the time of the wall hit, but I'd really have to try both and compare, because I don't have much sense of which would work better.

It also makes me wonder about doing the same in Robocode PIF. You could compare snapshots along the projections to further narrow down which situations match most closely or throw out irrelevant projections. You wouldn't have as many wall hits to compare, but you could still compare wall hits and distances or other stuff.

I guess the problem is that you mainly want to examine stuff that causes you to alter your path, which requires knowing the future - unless you're talking about things that don't move, like walls. But if a past movement log took the enemy far away from all other bots on the field, and projecting that log now would run them into a swarm of bots, that might not be tough to detect, and it'd be easy to realize you should throw that projection away.

Voidious (talk) 20:59, 21 August 2013

That reminds me of some things I was pondering during the development of Glacier. I'd say Glacier's enemy distance segments are among the better ones in the meleerumble, but one thought I had and never got around to trying was the notion of "single tick" style prediction of the whole field of other bots simultaneously, to predict the trend of group interactions rather than merely predicting an individual bot's future behavior from its immediate surroundings. After all, one bot making an aggressive movement in one corner of the field could set off a chain reaction causing bots on the other end of the field to move a bit, but the usual targeting systems aren't prepared to consider that.

If one wanted to really get tricky with that, there are ways one could use that "single tick" prediction in movement too... but yeah...

Rednaxela (talk) 21:39, 21 August 2013

I had similar thoughts while making demonicRage, but never realized the value till reading yours. I think it would make a big improvement against advanced bots. An "easy" way to test a diluted version of your concept would be to predict the weakest bot on the field normally, then predict the next weakest bot using the weaker bot's predicted location and data... then predict the third bot using the two weaker bots' predicted data, and so forth, so that the strongest opponent is predicted using the predicted data of all the other bots... That diluted method would likely be quick to jimmy up.
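
Roughly like this (a sketch with hypothetical names - predictPosition and the energy field are placeholders, not real API):

  -- Predict bots from weakest to strongest, feeding each prediction the
  -- already-predicted positions of the weaker bots.
  local function predictField(bots, ticksAhead)
    table.sort(bots, function(a, b) return a.energy < b.energy end)
    local predicted = {}
    for _, bot in ipairs(bots) do
      -- weaker bots are already in 'predicted'; stronger ones aren't known yet
      predicted[bot.name] = predictPosition(bot, predicted, ticksAhead)
    end
    return predicted
  end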

  • edit* On second thought, that breaks the 'bots interaction' part of your concept :) oh well :P
Jlm0924 (talk) 18:11, 22 August 2013
 

Instead of classifying data using a single opponent at a time, classify by all opponents at the same time.

When predicting opponent A's behaviour, instead of using only opponent A's distance and velocity as input, use the distance and velocity of the other opponents too. The same principle applies to any kind of classification.

I thought about this before, but didn't know how to deal with eliminated opponents. Thinking again, now I have some ideas.
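
For example, a feature vector for opponent A might be built something like this (a sketch; distanceBetween and speedOf are hypothetical helpers):

  -- KNN feature vector for opponent A that includes every other opponent's
  -- distance and speed relative to A, not just A's own state.
  -- distanceBetween / speedOf are hypothetical helpers.
  local function featuresFor(a, opponents)
    local features = { speedOf(a) }
    for _, other in ipairs(opponents) do
      if other ~= a and other.alive then
        table.insert(features, distanceBetween(a, other))
        table.insert(features, speedOf(other))
      end
    end
    return features
  end

The eliminated-opponent problem shows up immediately here: the vector changes length as bots die, so you'd need either a fixed slot per opponent or some sentinel value for dead ones.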

MN (talk) 14:18, 23 August 2013

One thing I've tried is attributes based on the force coming from an Anti-Gravity calculation. I thought it was going to be a killer feature, but despite being fairly rigorous about making sure it was doing what I wanted, I never got a performance gain out of it.

It seems like there must be a way to leverage that data, though.
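
Roughly what I mean - summing an inverse-square push away from each other bot and feeding the resulting force into the KNN attributes (a sketch; the weighting is arbitrary, and math.atan2 assumes Lua 5.1-style math):

  -- Simple inverse-square anti-gravity force on 'target' from every other bot,
  -- returned as magnitude and direction for use as KNN inputs.
  local function antiGravForce(target, others)
    local fx, fy = 0, 0
    for _, o in ipairs(others) do
      local dx, dy = target.x - o.x, target.y - o.y
      local distSq = dx * dx + dy * dy
      if distSq > 0 then
        local dist = math.sqrt(distSq)
        fx = fx + (dx / dist) / distSq   -- unit vector away, scaled by 1/d^2
        fy = fy + (dy / dist) / distSq
      end
    end
    return math.sqrt(fx * fx + fy * fy), math.atan2(fy, fx)
  end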

Voidious (talk) 17:12, 23 August 2013

If everyone used anti-gravity movement and assigned and weighted points the same way, it would work wonders.

Combat uses anti-gravity movement, but weights points differently, and also assigns anti-gravity points to enemy virtual bullets (shrapnel dodging), which are invisible to opponents. Good luck to anyone trying to guess the resulting anti-gravity force.

Many intermediate anti-gravity bots also assign random anti-gravity points across the battlefield in order to confuse opponents.

MN (talk) 18:55, 23 August 2013
 

I'd say "use distance from other opponents too" and "force coming from an Anti-Gravity calculation" are similar to what I used in Glacier's targeting actually. I used several dimensions which (more or less) measured the closest distance to another bot at different angle ranges.

With anti-grav systems, different ones will weight things differently, but in all cases the most prominent influences are the closest ones. As such, my goal was to create a measure with a small, finite number of dimensions that characterized the most prominent influences meaningfully.
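
Something along these lines - for each of a few angle ranges around the bot being predicted, take the distance to the nearest other bot in that sector (a sketch, not Glacier's actual code; math.atan2 assumes Lua 5.1-style math):

  -- For each angular sector around 'target', find the distance to the closest
  -- other bot whose bearing falls in that sector.
  local NUM_SECTORS = 4

  local function sectorDistances(target, others)
    local dists = {}
    for i = 1, NUM_SECTORS do dists[i] = math.huge end
    for _, o in ipairs(others) do
      local dx, dy = o.x - target.x, o.y - target.y
      local bearing = math.atan2(dy, dx) % (2 * math.pi)
      local sector = math.min(NUM_SECTORS,
          math.floor(bearing / (2 * math.pi / NUM_SECTORS)) + 1)
      dists[sector] = math.min(dists[sector], math.sqrt(dx * dx + dy * dy))
    end
    return dists
  end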

In any case though, none of those methods really considers the field-wide interactions that can occur, where movements can cause a chain reaction. For that, I doubt there is much of a reasonable option besides some sort of iterative process (maybe not per-tick - it could be larger steps, or iterations that aren't time-based, but yeah).

Rednaxela (talk) 20:15, 23 August 2013

Interesting. In Neuromancer I make some KNN attributes relative to the closest enemy, so distance, latVel, advVel. Then I use PIF so it doesn't matter that the attributes were from somebody else's perspective.
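
Something like this for the relative attributes, assuming headings are measured the same way as math.atan2 angles (a sketch with made-up field names):

  -- Distance plus lateral and advancing velocity of 'target' relative to the
  -- closest other bot - the sort of attributes described above.
  local function relativeAttributes(target, closestBot)
    local dx, dy = target.x - closestBot.x, target.y - closestBot.y
    local dist = math.sqrt(dx * dx + dy * dy)
    local bearing = math.atan2(dy, dx)                   -- from closest bot to target
    local relHeading = target.heading - bearing
    local latVel = target.speed * math.sin(relHeading)   -- sideways component
    local advVel = -target.speed * math.cos(relHeading)  -- positive = approaching
    return dist, latVel, advVel
  end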

Skilgannon (talk) 20:29, 23 August 2013

PIF as well here. Sounds to me like your attributes are designed more with a low density of melee bots in mind (i.e. the closest enemy having much more influence than the second and third closest), whereas mine are designed more with a high density of melee bots in mind. Interesting.

Rednaxela (talk) 21:06, 23 August 2013