Raven


Background Information

Bot Name
Raven
Author
Dsekercioglu
Extends
AdvancedRobot
What's special about it?
Raven uses a form of Go To Surfing in which it procedurally generates candidate movement paths rather than aiming for any single point.
Great, I want to try it. Where can I download it?
https://www.dropbox.com/s/ln53uvb3ddxe4bv/dsekercioglu.mega.Raven_3.56j8.jar?dl=1
How competitive is it?
Its best result is 7th place, but considering how close it is to Gilgalad, 7.5th would be more accurate =)
Credits
Rednaxela, Skilgannon, Nat, Starrynyte and any other contributors I am unaware of for the FastTrig class.
Skilgannon for the bugless, fast Kd-tree.
Cb for the non-iterative wall smoothing.
Rozu for the precise prediction code.
Chase-san for the intercept method I used in my PPMEA calculations.
AW for giving me the idea of integrating the danger function to get the danger value over a given bot width.
Kev for inspiring me to use PyTorch, based on my loose understanding of how BeepBoop works.

Strategy

How does it move?
A form of Go To Surfing.
It falls back to True Surfing when there are no bullets in the air.
How does it fire?
It uses GuessFactor targeting with KNN (k-nearest neighbours).
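Raven's actual gun is written in Java inside the robot; the snippet below is only a minimal Python sketch of the general GuessFactor-with-KNN idea, not Raven's code. The feature set, the value of k, the Gaussian kernel and its bandwidth are illustrative assumptions.

    import numpy as np

    def predict_guess_factor(history, current_features, k=25, bandwidth=0.1):
        # history: list of (feature_vector, guess_factor) pairs logged from past waves
        # current_features: feature vector for the current firing situation
        if not history:
            return 0.0  # nothing learned yet; fire head-on
        feats = np.array([f for f, _ in history], dtype=float)
        gfs = np.array([gf for _, gf in history], dtype=float)
        # pick the k most similar logged situations
        dists = np.linalg.norm(feats - np.asarray(current_features, dtype=float), axis=1)
        nearest = np.argsort(dists)[:min(k, len(history))]
        # score candidate guess factors with a Gaussian kernel density over the neighbours
        candidates = np.linspace(-1.0, 1.0, 101)
        density = np.exp(-(((candidates[:, None] - gfs[nearest][None, :]) / bandwidth) ** 2)).sum(axis=1)
        return float(candidates[np.argmax(density)])

The returned guess factor would then be converted into a firing angle relative to head-on, which is the usual GuessFactor convention.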
How does it dodge bullets?
It tries to minimize the number of guess factors it would get hit at, weighting each logged guess factor by its recorded weight and by the damage of the corresponding bullet.
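As a hedged illustration rather than Raven's real (Java) surfing code: the sketch below scores one candidate point on one enemy wave by summing, over the bot's width, the logged guess factors weighted by their weight and bullet damage. The triangular kernel, its 0.1 width and the 21-sample integration are assumptions.

    import numpy as np

    def danger(candidate_gf, logged_gfs, weights, bullet_damages, bot_half_width_gf=0.05):
        # candidate_gf:      guess factor the bot would sit at if it moved to this point
        # logged_gfs:        guess factors the enemy has hit (or nearly hit) us at before
        # weights:           confidence weight of each logged guess factor
        # bullet_damages:    damage the corresponding bullet would deal if it hit
        # bot_half_width_gf: half of the bot's width expressed in guess-factor units
        logged_gfs = np.asarray(logged_gfs, dtype=float)
        weights = np.asarray(weights, dtype=float)
        bullet_damages = np.asarray(bullet_damages, dtype=float)
        # integrate a simple triangular kernel over the bot's width around the candidate
        xs = np.linspace(candidate_gf - bot_half_width_gf, candidate_gf + bot_half_width_gf, 21)
        kernel = np.clip(1.0 - np.abs(xs[:, None] - logged_gfs[None, :]) / 0.1, 0.0, None)
        return float((kernel * weights * bullet_damages).sum() / len(xs))

The candidate point with the lowest total danger over all bullets in the air would then be chosen as the destination.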
What does it save between rounds and matches?
Between rounds, it saves the kd-trees. Between matches, it doesn't save anything.

Additional Information

Where did you get the name?
It just popped into my mind and I thought it would be a fitting name for a bot that uses machine learning.
Can I use your code?
Yes, I tried to make the code as clean and understandable as possible.
What's next for your robot?
A proper versioning system so I don't keep accidentally releasing experimental versions.
Faster code so it doesn't skip turns.
Better bullet shadow calculations.
Tuning the guns since they haven't been tuned since the first version.
Gun Heat Waves.
Maybe a pre-trained movement or gun to use in the first ticks of the battle.
A flattener that actually improves its scores against adaptive targeting.
Improving the PyTorch-tuned targeting system.
PyTorch Tuner
The current tuning system is very naive and rather experimental.
The formula used for transforming the data points is ax + bx^2.
For each (data point, Guess Factor) pair, it finds the K closest and K furthest Guess Factors in the given match and saves the corresponding weights.
Then the transformer is trained to minimize (NN(input) - NN(kClosest))^2.
The obvious flaw with this system is that the optimal solution would be to make all weights 0.
This is prevented in a rather inelegant way:
All the a terms are normalized so that the sum of their absolute values is 1.
The b terms are clipped so that they can't be smaller than 0, so they can only increase the weights.
This also makes sure that all transformations are one-to-one formulas (for the better?).
It is important to normalize the transformer weights and the outputs, as the optimal solution to this kind of problem is otherwise to return the smallest numbers possible.
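The scheme above can be made concrete with a small PyTorch sketch. This is only an illustration of the description given here, not Raven's actual tuner: the optimiser, learning rate, batch handling and the way the K furthest Guess Factors would enter the loss are not specified above and are assumed or left out.

    import torch

    class FeatureTransform(torch.nn.Module):
        # Per-feature transform a*x + b*x^2 with the constraints described above:
        # the a terms are normalized so their absolute values sum to 1, and the
        # b terms are clipped to be non-negative.
        def __init__(self, num_features):
            super().__init__()
            self.a = torch.nn.Parameter(torch.ones(num_features))
            self.b = torch.nn.Parameter(torch.zeros(num_features))

        def forward(self, x):
            a = self.a / self.a.abs().sum()   # sum of |a| is 1
            b = torch.clamp(self.b, min=0.0)  # b can only increase the weights
            return a * x + b * x * x

    def tune(features, kclosest_features, num_steps=1000, lr=1e-2):
        # features:          (N, D) firing situations
        # kclosest_features: (N, K, D) features of the K data points whose Guess
        #                    Factors were closest to each data point's own
        # Minimizes (NN(input) - NN(kClosest))^2 as described above.
        model = FeatureTransform(features.shape[1])
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(num_steps):
            opt.zero_grad()
            t_x = model(features).unsqueeze(1)  # (N, 1, D)
            t_close = model(kclosest_features)  # (N, K, D)
            loss = ((t_x - t_close) ** 2).mean()
            loss.backward()
            opt.step()
        return model

As noted above, without the normalization of the a terms and the clipping of the b terms, the optimiser could simply shrink every weight towards zero to minimize this loss.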
Does it have any White Whales?
Drifter has been crushing the latest versions.
Ever since I realized memory allocations and deallocations weren't free, the true White Whale is the Java GC :)
What other robot(s) is it based on?
It's loosely based on WhiteFang; I have tried to copy the design but make it as precise as it can be.