Wiki Targeting/Dynamic Segmentation
A dynamic segmentation algorithm - By Vic Stewart
The original Zoom Targeting concept was a good algorithm for Dynamic Segmentation, but it had one major drawback: unacceptable memory consumption. I discarded the idea because I could not solve this problem. However, a year later I think I have found a solution:
A Node contains pointers to a Leaf class and a Splitter class. Only one of the two pointers points to an actual object; the other one is a null pointer. This defines whether a Node is a Leaf or a Splitter (Branch) Node.
The first Node starts its life as a Leaf Node.
-----------------
|     Leaf      |
|     Node      |
-----------------
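A minimal sketch of such a Node in Java (the class and field names are my own, not the ones used in the Code sub-page; the Leaf and Splitter classes are sketched further below):

// Exactly one of the two references is non-null; that defines whether the
// Node currently is a Leaf Node or a Splitter (Branch) Node.
public class Node {
    Leaf leaf = new Leaf();   // every Node starts its life as a Leaf Node
    Splitter splitter = null; // set once the Node has been split

    boolean isLeaf() {
        return leaf != null;
    }
}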
In Leaf mode it collects wave data in the form of Observations. The Observations are all kept in the Leaf Node for later use. They are also rolled up into GF arrays, as in other GF guns. Obviously this GF array is used for the actual aiming. When the number of Observations grows beyond a certain threshold, the Node becomes eligible to be split into two child Leaf Nodes.
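A sketch of what the Leaf part could look like, assuming a simple hit-count GF array; the bin count and split threshold below are illustrative values, not taken from the original:

import java.util.ArrayList;
import java.util.List;

// An Observation holds a wave's segmentation values plus the GuessFactor bin it visited.
class Observation {
    double[] dims; // one value per Dimension (distance, velocity, ...)
    int gfBin;     // the GuessFactor bin the enemy ended up in
}

// The Leaf part: keeps the raw Observations for a later split and rolls them
// up into a GF array that is used for the actual aiming.
class Leaf {
    static final int GF_BINS = 47;         // illustrative bin count
    static final int SPLIT_THRESHOLD = 40; // illustrative splitting threshold

    final List<Observation> observations = new ArrayList<Observation>();
    final int[] gfArray = new int[GF_BINS];

    void add(Observation o) {
        observations.add(o); // keep the wave data for later use
        gfArray[o.gfBin]++;  // and roll it up into the GF array
    }

    boolean readyToSplit() {
        return observations.size() >= SPLIT_THRESHOLD;
    }

    int bestGfBin() {        // aim at the most visited GuessFactor bin
        int best = 0;
        for (int i = 1; i < GF_BINS; i++)
            if (gfArray[i] > gfArray[best]) best = i;
        return best;
    }
}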
When splitting is triggered, the Node turns itself into a Splitter Node. It now has to choose one Dimension/parameter by which it can separate its data. The Node can choose between any number of Dimensions. All Dimensions are individually tested to find the most useful splitting Dimension, and the Dimension that comes out on top is used for the actual split. For example, the Node may choose DISTANCE as its most useful splitting criterion. Observations with a distance < 500 will be sent to the first child, Observations with a distance >= 500 will be sent to the second child.
The Parent Leaf Node now becomes a Splitter (Branch) Node.
-----------------     -----------------
|     Leaf      |     |     Leaf      |
|     Node      |     |     Node      |
-----------------     -----------------
         \                 /
           \             /
          -----------------
          |   Splitter    |
          |     Node      |
          -----------------
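The original text does not say how a Dimension's usefulness is measured, so the following sketch uses an assumed score (the difference between the mean GuessFactor bins of the two halves) and splits at the median value of the chosen Dimension:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Turns a full Leaf into a Splitter: every Dimension is tested individually and the
// one with the highest score wins. The score and the median split value are assumptions.
class Splitter {
    int dimension;     // the Dimension chosen for splitting, e.g. DISTANCE
    double splitValue; // e.g. 500: dims[dimension] < 500 goes left, >= 500 goes right
    Node left, right;

    static Splitter create(List<Observation> data, int numDimensions) {
        Splitter best = null;
        double bestScore = -1;
        for (int d = 0; d < numDimensions; d++) {
            double value = medianOf(data, d);
            double leftSum = 0, rightSum = 0;
            int leftCount = 0, rightCount = 0;
            for (Observation o : data) {
                if (o.dims[d] < value) { leftSum += o.gfBin; leftCount++; }
                else                   { rightSum += o.gfBin; rightCount++; }
            }
            if (leftCount == 0 || rightCount == 0) continue; // this Dimension separates nothing
            double score = Math.abs(leftSum / leftCount - rightSum / rightCount);
            if (score > bestScore) {
                bestScore = score;
                best = new Splitter();
                best.dimension = d;
                best.splitValue = value;
            }
        }
        // hand the parent's Observations down to the two new child Leaf Nodes
        // (a real gun would also handle the case where no Dimension separates the data)
        best.left = new Node();
        best.right = new Node();
        for (Observation o : data)
            (o.dims[best.dimension] < best.splitValue ? best.left : best.right).leaf.add(o);
        return best;
    }

    private static double medianOf(List<Observation> data, int d) {
        List<Double> values = new ArrayList<Double>();
        for (Observation o : data) values.add(o.dims[d]);
        Collections.sort(values);
        return values.get(values.size() / 2);
    }
}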
Waves are now routed through the Splitter Node to the two Leaf Nodes, until one of the Leaf Nodes is again triggered to split.
                      -----------------     -----------------
                      |     Leaf      |     |     Leaf      |
                      |     Node      |     |     Node      |
                      -----------------     -----------------
                               \                 /
                                 \             /
-----------------             -----------------
|     Leaf      |             |   Splitter    |
|     Node      |             |     Node      |
-----------------             -----------------
         \                       /
           \                   /
            -----------------
            |   Splitter    |
            |     Node      |
            -----------------
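A sketch of how a new Observation could travel down such a tree: the Splitter Nodes route it, the Leaf Node it lands in stores it, and a full Leaf turns itself into a Splitter (names and the dimension count are assumptions):

// How a new Observation travels down the tree, and when a Leaf decides to split.
class Tree {
    static final int NUM_DIMENSIONS = 16; // illustrative maximum

    Node root = new Node(); // the first Node starts its life as a Leaf Node

    void addObservation(Observation o) {
        Node node = root;
        while (!node.isLeaf()) { // the Splitter Nodes route the wave downwards
            Splitter s = node.splitter;
            node = (o.dims[s.dimension] < s.splitValue) ? s.left : s.right;
        }
        node.leaf.add(o);
        if (node.leaf.readyToSplit()) {
            node.splitter = Splitter.create(node.leaf.observations, NUM_DIMENSIONS);
            node.leaf = null; // the Node is now a Splitter (Branch) Node
        }
    }

    Leaf leafFor(double[] dims) { // used at aim time to look up the matching GF array
        Node node = root;
        while (!node.isLeaf()) {
            Splitter s = node.splitter;
            node = (dims[s.dimension] < s.splitValue) ? s.left : s.right;
        }
        return node.leaf;
    }
}

At aim time the gun would call leafFor(...) with the current segmentation values and fire at the bin returned by bestGfBin().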
This process is repeated as the match progresses.
About memory consumption:
Suppose the splitting threshold is 40. That means the average Leaf Node will hold about 30 Observations (20 for a freshly split child, 39 for one just about to split). In an average 35-round match, about 30000 waves will be collected. That means after 35 rounds we will have approximately 30000/30 = 1000 Leaf Nodes. For 1000 Leaf Nodes there are 999 Splitter Nodes. So memory consumption will not be a problem. A fair amount of memory is consumed by the Leaf Nodes, but nothing compared to the amount the original ZoomTargeting algorithm could devour. Of course, it will still use more memory than your average GF gun, because all wave data/Observations are held in memory in addition to the GF arrays.
In theory, this algorithm has some interesting properties:
- It can have many more segmentation dimensions than any previous algorithm. Even the silliest parameter can be included (TIME_TILL_NEXT_BULLET :-) because dimensions that do not perform are simply never used.
- Some dimensions may only be useful against specific enemies. For instance, Tron spends a lot of time right in the tip of a corner, so detailed corner parameters are very useful against Tron. Against other bots these parameters may not be useful, and therefore will not be used. The algorithm finds the most useful parameters by itself.
- Some dimensions may only be useful in very specific situations, defined by a certain combination of other dimensions. For instance, when WALL_DISTANCE==300, CORNER_DISTANCE will be of no use, but TIME_TILL_LAST_SIGN_CHANGE could be very important in this case, because the opponent is not hampered by any walls. The reverse is also true: when BLIND_MANS_STICK_FRONT_WALL==20 (the bot is nearly colliding with a wall) the TIME_TILL_LAST_SIGN_CHANGE parameter will be totally useless, because the bot has to reverse or turn anyway, regardless of the value of TIME_TILL_LAST_SIGN_CHANGE. This algorithm will find those dependencies.
There is an interesting spin-off of this algorithm:
Tree Serialization
The tree structure can be serialized with only 4 bits per Splitter Node, given a maximum of 16 Dimensions. This is possible because the only thing a Splitter Node has to know in order to do its job is the Dimension on which it splits the data.
The file size for the above example would then be 999 Splitter Nodes * 4 bits, roughly 0.5 kB unzipped. The resulting file could be read by the robot in the next match, which would dramatically increase its learning speed. Note that no GuessFactors are actually saved, only the tree structure.
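A sketch of the packing idea. To be able to rebuild the tree shape on loading, this version additionally reserves one nibble value as a leaf marker during a pre-order walk, leaving room for 15 Dimensions; that marker is my own assumption, not part of the description above:

import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;

// Packs the tree into 4 bits per Node in pre-order. Splitters store the Dimension
// they split on; the reserved value 0xF marks a Leaf so the shape can be rebuilt.
class TreeSerializer {
    static final int LEAF_MARKER = 0xF; // assumption: one nibble value marks a leaf

    static byte[] serialize(Node root) {
        List<Integer> nibbles = new ArrayList<Integer>();
        writeNode(root, nibbles);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (int i = 0; i < nibbles.size(); i += 2) {
            int high = nibbles.get(i);
            int low = (i + 1 < nibbles.size()) ? nibbles.get(i + 1) : 0;
            out.write((high << 4) | low); // two 4-bit values per byte
        }
        return out.toByteArray();
    }

    private static void writeNode(Node node, List<Integer> nibbles) {
        if (node.isLeaf()) {
            nibbles.add(LEAF_MARKER);
        } else {
            nibbles.add(node.splitter.dimension); // the only thing a Splitter has to know
            writeNode(node.splitter.left, nibbles);
            writeNode(node.splitter.right, nibbles);
        }
    }
}

Note that, as in the description above, the split values themselves are not stored; only the chosen Dimensions and the tree shape are, so the actual statistics are re-learned from new waves.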
The serializing could be limited to a subset of Splitter Nodes, to fit into 1 kB files. That would make it possible to save data on all RoboRumble participants. Even though the entire tree structure is not saved, it may give the bot enough of a head start to outperform its opponents. Of course the robot will expand the tree itself using waves it collects during the match.