BerryBots updates
So I try not to spam you guys too much about BerryBots, but maybe I'll post about some of the bigger stuff.
A couple people have posted bots on the forums recently (both Robocoders):
- Frohman posted the first public bot, Insolitus, a battle bot that beats supersample.BasicBattler in 1v1 and draws about even with it in Melee (which means it also crushes the other samples). Some type of antigravity movement and linear targeting.
- Justin (of DemonicRage) has a bot called NightShade. It's the strongest 1v1 bot for now, also supports races and mazes, and has pretty sexy debugging graphics.
Next big thing coming is the Game Runner API, which I'm pretty excited about. I have it working now, and will probably release it in a week or two with some polish and related changes to results handling. Some cool things are that it handles multi-threading for you and makes it really easy to prompt the user for inputs (seasons, threads, ships, stages, etc.). You launch a Game Runner script through the app kind of like starting a battle.
Just released v1.2.0 with the Game Runner API. I think it holds up pretty well in a comparison against the Robocode control API - it handles multi-threading under the covers, lets you configure input parameters and prompt the user for those fields with a graphical dialog, and includes some examples that do basic batch battles or a single-elimination tournament.
- Video overview
- BerryBots v1.2.0 details
- batchduels.lua gives a good idea how easily you can implement a basic multi-threaded batch battle runner.
I feel like this was the final prerequisite to the kind of real bot development I'm used to. So I'm pretty stoked on that. :-)
And now that Game Runner is out, I added a leaderboard for the LaserGallery stage. Might work on a gun myself sometime soon: LaserGallery/Scores
This is pretty cool - HTML5 replays: 6-bot melee on battle1
(Sorry, seems broken on Linux/Firefox, still troubleshooting that one.)
I finally got around to writing a learning gun for BerryBots. At first I was just out to port Rednaxela's kd-tree to Lua, since I thought that was a prerequisite to the gun I wanted. A prerequisite to testing the correctness of a kd-tree is an optimized linear KNN search, and once I had that, I realized it would be plenty fast enough to run the targeting challenge stage, which is only a few thousand ticks per enemy with a decent gun. So I built out the rest of the gun.
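The brute-force KNN step described above can be sketched in a few lines (Python here just for illustration; the real gun is Lua, and these feature vectors and labels are made up):

```python
import math

def linear_knn(points, query, k):
    """Brute-force k-nearest-neighbors: score every stored point
    against the query and keep the k closest."""
    scored = []
    for features, payload in points:
        dist = math.sqrt(sum((f - q) ** 2 for f, q in zip(features, query)))
        scored.append((dist, payload))
    scored.sort(key=lambda pair: pair[0])
    return [payload for _, payload in scored[:k]]

# Toy example: 1-D "situations" with labels.
data = [((0.1,), "a"), ((0.5,), "b"), ((0.9,), "c")]
print(linear_knn(data, (0.45,), 2))  # ['b', 'a'] - the two nearest situations
```

For a few thousand stored situations per match, a linear scan like this stays fast enough that the kd-tree is an optimization, not a requirement.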
So it's mostly just KNN with waves / displacement vectors. A precise MEA and GuessFactors are probably worth pursuing. But I think the next thing to start handling is the presence of walls, which has a lot of ramifications.
It came out pretty good: testgun.lua. Not much code, runs pretty fast, reasonably effective. And it's a new high score on Laser Gallery.
Ahh, neat.
With regards to handling the presence of walls, that's a case where I see the utility of PIF versus displacement vectors and guessfactors as being significant. Guessfactors aren't really well-suited to the presence of walls. Displacement vectors allow you to handle walls occluding where they end up. PIF, however, allows you not only to handle walls occluding where they end up, but also to rule out paths which would be interrupted by walls.
Yeah, agreed that PIF has some key advantages. Checking wall collisions every PIF tick might be slow, I'm not sure. There's no trig, but it would be (# walls) * (# projections) * (# ticks per projection) line intersection checks - not sure how much that adds to the PIF CPU load.
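Each of those per-tick checks boils down to a segment-segment intersection test. A standard orientation-based version (not BerryBots' actual code, just the textbook cross-product check) looks like:

```python
def orient(ax, ay, bx, by, cx, cy):
    # Sign of the cross product (b - a) x (c - a).
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)

def segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 crosses segment q1-q2
    (ignoring collinear edge cases for brevity)."""
    d1 = orient(*q1, *q2, *p1)
    d2 = orient(*q1, *q2, *p2)
    d3 = orient(*p1, *p2, *q1)
    d4 = orient(*p1, *p2, *q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

# One projected tick of movement vs. one wall edge:
print(segments_intersect((0, 0), (2, 2), (0, 2), (2, 0)))  # True: paths cross
```

No trig involved, as noted above - just multiplications and comparisons - so the cost per check is small; the question is purely how many checks the (walls × projections × ticks) product forces.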
I'm excited to try this in Melee, too, since you can see everyone at once. Though I may need that kd-tree pretty quickly.
You don't need to do that many line intersection checks. If you store your walls in a tree (i.e. r-tree or bucket kd-tree), or even a sorted list (sorted by one of the two axes), you can avoid needing to check every single wall by quickly ruling out large groups of walls that are too far away.
If your robot is using a pathfinding algorithm to navigate the walls, you could re-use that structure to find which walls you need to check (if any) to avoid needing new structures to search walls (i.e. track the current pathfinding node, and use that node's information to know which boundaries to check on a tick if any).
To save per-tick iterations, you can also use a trick I've used and seen used in PIF implementations: Based on the minimum number of ticks away waves/walls are, you can skip ahead in your PIF history, ignoring all the ticks in between.
In other words... nah... you can more or less reduce both the (# walls) term and the (# ticks per projection) term to much smaller numbers.
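The sorted-list variant mentioned above might look something like this (illustrative Python; the wall records and coordinates are invented, and walls are assumed axis-aligned boxes sorted by their minimum x):

```python
import bisect

# Hypothetical wall records: (min_x, max_x, min_y, max_y), kept sorted by min_x.
walls = sorted([(0, 10, 50, 52), (40, 42, 0, 100), (90, 100, 50, 52)])
min_xs = [w[0] for w in walls]

def candidate_walls(seg_min_x, seg_max_x):
    """Return only walls whose x-extent can overlap the segment's x-extent.
    Every wall whose min_x is past seg_max_x is ruled out in one O(log n)
    bisect, instead of being intersection-tested individually."""
    hi = bisect.bisect_right(min_xs, seg_max_x)
    return [w for w in walls[:hi] if w[1] >= seg_min_x]

print(candidate_walls(35, 45))  # only the wall spanning x in [40, 42]
```

So the (# walls) term effectively becomes (# walls near the projected segment), which for most stage layouts is a small constant.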
Thanks for the ideas! I love the kd-tree/sorted list one. I might have to go back and optimize it in the main game engine.
I'd say for that purpose r-trees are more suited than kd-trees, but yeah. From what I've heard, it's very much standard practice to use r-trees or other structures for collision detection in game engines where you have a very large number of entities/surfaces/etc which you may want to perform collision detection with.
With regards to 'other structures', so long as things are in a bounded-size region and the density variation of objects is not too extreme, rather than a tree you'd probably get better results taking a very simple approach: divide the area into a 2D grid, scaled so that you have no more than a few entities in each cell, and in each cell store a list of the entities within that cell. The only reason to use a tree instead of a coarse array is when the density of collidable objects/surfaces varies significantly between areas (i.e. you don't want lots of cells that are near-empty alongside other cells that have thousands of objects).
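That grid approach is only a handful of lines. A minimal sketch (Python, with a made-up cell size and entity names):

```python
from collections import defaultdict

CELL = 100  # cell size, chosen so only a few entities land in each cell

def cell_of(x, y):
    return (int(x // CELL), int(y // CELL))

grid = defaultdict(list)

def insert(entity, x, y):
    grid[cell_of(x, y)].append(entity)

def nearby(x, y):
    """Entities in the cell containing (x, y) and its 8 neighbors -
    the only candidates a collision check needs to consider."""
    cx, cy = cell_of(x, y)
    found = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            found.extend(grid[(cx + dx, cy + dy)])
    return found

insert("wall-A", 50, 50)
insert("wall-B", 950, 950)
print(nearby(60, 40))  # only "wall-A"; "wall-B" is never examined
```

Lookup is O(1) regardless of how many walls exist, which is why game engines reach for this before trees when density is roughly uniform.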
So thinking this through now, the walls in BerryBots have some interesting implications for PIF, too. For instance, if a log you're replaying includes the bot hitting a wall, what do you do? Or if a replay would cause you to hit a wall? In Robocode you might throw those out - it's rare and kind of a flukey situation, since you mostly never want to hit the wall. But there's no real disincentive to hitting walls in BerryBots, so it could be super frequent. Maybe you instead replay until the wall hit, then do a sort of single-tick-style re-projection for similar situations from that point on. And you probably want to precisely predict the wall bounce physics too.
Maybe what would make sense would be "fuzzy" wall collision handling that validates the projection against what happened in the PIF log:
- Detect and log when the enemy hits a wall as part of the PIF log.
- If the log that is being replayed does not contain a wall collision at a similar time, throw out the projection if it collides with a wall by more than a certain margin (give leeway for the projection barely glancing by it).
- If the log that is being replayed does contain a wall collision, throw out the projection unless the projection also contains a wall collision at a similar time (or at least a near-collision).
The idea would be to filter projections based on an approximate presence/absence of a collision matching the PIF log, in order to ensure the scenario of that projection is sufficiently similar to what you'll be predicting.
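Those filtering rules might reduce to something like the following (Python sketch; the time-slop threshold is a made-up tuning constant, and the margin/leeway rule is collapsed into it for brevity):

```python
TIME_SLOP = 5  # ticks of leeway when matching collision times (invented)

def projection_is_valid(log_hits, proj_hits):
    """log_hits / proj_hits: tick numbers at which the recorded log /
    the replayed projection collide with a wall."""
    for tick in proj_hits:
        # Projection hits a wall the log never did near that time: reject.
        if not any(abs(tick - t) <= TIME_SLOP for t in log_hits):
            return False
    for tick in log_hits:
        # Log hit a wall but the projection never does near that time: reject.
        if not any(abs(tick - t) <= TIME_SLOP for t in proj_hits):
            return False
    return True

print(projection_is_valid([30], [32]))  # True: hits line up within the slop
print(projection_is_valid([30], []))    # False: log hit a wall, replay doesn't
```

Surviving projections are then the ones whose wall interactions roughly match the scenario the log came from.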
Very cool ideas. I guess you might need to allow for more fuzziness depending on the wall layout / wall hit frequency. I still like the idea of re-projecting from time of wall hit, but I'd really have to just try both and compare because I just don't have much sense of which would work better.
It also makes me wonder about doing the same in Robocode PIF. You could compare snapshots along the projections to further narrow down which situations match most closely or throw out irrelevant projections. You wouldn't have as many wall hits to compare, but you could still compare wall hits and distances or other stuff.
I guess the problem is you mainly want to examine stuff that causes you to alter your path, which requires knowing the future unless you're talking about walls that don't move. But if a past move log brings the enemy far away from all other bots on the field, and projecting that log now causes them to run into a swarm of bots, that might not be tough to predict and easy to realize you should throw it away.
That reminds me of some things I was pondering during the development of Glacier. I'd say Glacier's enemy distance segments are among the better ones in the meleerumble, but one thought I had and never got around to trying was the notion of "single tick" style prediction of the whole field of other bots simultaneously, to predict the trend of group interactions rather than merely predicting an individual bot's future behavior from its immediate surroundings. After all, one bot making an aggressive movement in one corner of the field could have a chain reaction causing bots on the other end of the field to move a bit, but the usual targeting systems aren't prepared to consider that.
If one wanted to really get tricky with that, there are ways one could use that "single tick" prediction in movement too... but yeah...
I had similar thoughts while making DemonicRage, but never realized the value till reading yours. I think it would make a big improvement against advanced bots. An "easy" way to test a diluted version of your concept would be to predict the weakest bot on the field normally, then predict the next weakest bot using the weaker bot's predicted location and data... then predict the 3rd bot using the 2 weaker bots' predicted data... and so forth, so that the strongest opponent is predicted using the predicted data of all the other bots. That diluted method would likely be quick to jimmy up.
- edit* on second thought that breaks the 'bots interaction' part of your concept :) oh-well :P
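The chained version described above is small enough to sketch (Python, purely illustrative; `toy_predict` is a placeholder for a real per-bot predictor):

```python
def chained_predict(bots, predict):
    """bots: ordered weakest -> strongest. predict(bot, known) returns the
    bot's projected state given the already-predicted states of the weaker
    bots, so each stronger bot sees progressively more predicted context."""
    predicted = {}
    for bot in bots:
        predicted[bot] = predict(bot, dict(predicted))
    return predicted

# Placeholder predictor: just reports how much predicted context it was given.
def toy_predict(bot, known):
    return float(len(known))

print(chained_predict(["weak", "mid", "strong"], toy_predict))
# {'weak': 0.0, 'mid': 1.0, 'strong': 2.0}
```

As the edit note says, this one-pass chain misses the mutual interactions; iterating the whole loop a few times until the predictions stop moving would be the closer-to-full version.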
Instead of classifying data using a single opponent at a time, classify by all opponents at the same time.
When predicting opponent A's behaviour, instead of using only opponent A's distance and velocity as input, use distance and velocity from the other opponents too. The same principle applies to any kind of classification.
I thought about this before, but didn't know how to deal with eliminated opponents. Thinking again, now I have some ideas.
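One way those multi-opponent inputs might look as a KNN feature vector, with eliminated opponents handled by padding with a sentinel distance (all names and constants here are invented, not anyone's actual bot):

```python
MAX_OPPONENTS = 4
DEAD_DISTANCE = 2000.0  # sentinel distance for eliminated opponents (invented)

def situation_features(target, others):
    """Feature vector for predicting `target`: its own velocity plus the
    sorted distances to every other opponent, padded so the vector length
    stays fixed as bots are eliminated."""
    dists = sorted(o["distance"] for o in others)
    dists += [DEAD_DISTANCE] * (MAX_OPPONENTS - 1 - len(dists))
    return [target["velocity"]] + dists

print(situation_features({"velocity": 8.0},
                         [{"distance": 300.0}, {"distance": 120.0}]))
# [8.0, 120.0, 300.0, 2000.0]
```

Sorting the distances keeps the vector invariant to which particular opponent is where, and the sentinel lets early-battle and endgame situations live in the same space.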
One thing I've tried is attributes based on the force coming from an Anti-Gravity calculation. I thought it was going to be a killer feature, but despite being fairly rigorous about making sure it was doing what I wanted, I never got a performance gain out of it.
It seems like there must be a way to leverage that data, though.
If everyone used anti-gravity movement, assigned and weighted points the same way, it would work wonders.
Combat uses anti-gravity movement, but weights points differently, and also assigns anti-gravity points to enemy virtual bullets (shrapnel dodging), which are invisible to opponents. Good luck to anyone trying to guess the resulting anti-gravity force.
Many intermediate anti-gravity bots also assign random anti-gravity points across the battlefield in order to confuse opponents.
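For reference, the anti-gravity force being discussed is usually the classic inverse-square repulsion sum. A minimal sketch (the point list and strengths are made up; real bots differ exactly in how they choose and weight these):

```python
import math

def antigrav_force(x, y, points):
    """Sum inverse-square repulsion from each (px, py, strength) point:
    each point pushes the bot directly away from it, with the push
    falling off as distance squared."""
    fx = fy = 0.0
    for px, py, strength in points:
        dx, dy = x - px, y - py
        dist_sq = dx * dx + dy * dy
        if dist_sq == 0:
            continue  # sitting on the point; direction undefined
        dist = math.sqrt(dist_sq)
        fx += strength * dx / (dist_sq * dist)  # (unit vector) / dist^2
        fy += strength * dy / (dist_sq * dist)
    return fx, fy

fx, fy = antigrav_force(100.0, 0.0, [(0.0, 0.0, 1000.0)])
print(round(fx, 4), fy)  # pushed along +x, away from the point
```

The point of the comments above is that two bots running this exact loop can still be mutually unpredictable, because the `points` list and `strength` weights are where all the individuality lives.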
I'd say "use distance from other opponents too" and "force coming from an Anti-Gravity calculation" are similar to what I used in Glacier's targeting actually. I used several dimensions which (more or less) measured the closest distance to another bot at different angle ranges.
With anti-grav systems, different ones will weight things differently, but in all cases the most prominent influences are the closest ones. As such, my goal was to create a measure with a small finite number of dimensions that characterized the most prominent influences meaningfully.
In any case though, all those methods don't really consider the field-wide interactions that can occur, where movements can cause a chain reaction. For that, I doubt there is much of a reasonable option besides some sort of iterative process (maybe not per-tick, could be larger steps or iterations that are not time-based, but yeah).
Interesting. In Neuromancer I make some KNN attributes relative to the closest enemy, so distance, latVel, advVel. Then I use PIF so it doesn't matter that the attributes were from somebody else's perspective.
Back at this and the replay support's almost ready. Now with ship/stage output consoles, video player controls, and results at the end! [1] [2] [3] [4] [5] Still a few loose ends and optimizing to be done. (And I think Linux/Firefox, that combo specifically, is still screwed up.)
Some things I'm really excited about:
- The data collection is pretty fast and low overhead, so I can leave it on all the time in the engine. Last I checked it was a ~5% performance hit, which seems worth it.
- You'll be able to save replays from the Game Runner API. So for anomalous results, you can save the replay (which includes your output console). Like right now I have a 99.7% success rate on Laser Gallery and need to figure out what's up in those 3 of 1000 cases.
- Long term: I'd like to have a web app that lets you select a stage, some sample bots, input Lua code for your own bot, submit form to run match headless in the cloud, and view replay in the browser.
And man, now that I'm testing this with the Game Runner API, I am just salivating. You can run a tourney or a benchmark and literally save every replay to watch later without any problem, besides maybe disk space. (Looks like ~100 bytes per tick in these 1v1 matches I just ran. So 10,000 ticks would be 1 meg.) How cool would it have been to have replays for every match in the Twin Duel every week when it was running?
Sorry to boast, just excited... :-)
I'm not sure I'll make the time, but I don't see any reason we couldn't add this same style of replay support to Robocode. All you really need is fast/native buffers to cache all the pertinent data during the match in a lazy/efficient way. Java has ByteBuffers. I re-tested and with BerryBots, it really is only a 5% performance penalty with trivial bots. And of course some memory, but not even that much. And once you have sophisticated bots, the engine itself is only a small % of overall CPU usage anyway.
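The buffering idea is just appending fixed-size packed records per tick. Sketched in Python with `struct` for illustration (a Java version would do the same with ByteBuffer; the record layout here is invented, not BerryBots' actual format):

```python
import struct

# Hypothetical per-tick record: tick, ship id, x, y, heading.
TICK_FORMAT = struct.Struct("<IHfff")  # 18 bytes per ship per tick

buf = bytearray()

def record_tick(tick, ship_id, x, y, heading):
    # Appending pre-packed bytes is cheap enough to leave on every tick.
    buf.extend(TICK_FORMAT.pack(tick, ship_id, x, y, heading))

for tick in range(3):
    record_tick(tick, 1, 100.0 + tick, 200.0, 1.57)

print(len(buf))  # 3 records * 18 bytes = 54
```

At a few dozen bytes per ship per tick, that's consistent with the ~100 bytes/tick and ~1 MB per 10,000-tick match figures mentioned earlier in the thread.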
There's still a fair amount of work in coalescing/parsing the data, saving it, writing the replay, tying it into the GUI and control API, the renderer and all that, but no major technical limitations I can see. The biggest obstacle for me would just be getting my build environment set up to build Robocode, which I've never done before. The JavaScript rendering could probably be forked from BerryBots, too, instead of starting from scratch.