Fragment of a discussion from Talk:Oculus

I don't think it's because of a bug. Every movement I make turns out really bad. My old bots were built on BasicGFSurfer, so I don't think there are bugs.

Dsekercioglu (talk)18:45, 29 August 2017

Hmmm, it's neural, right? Are there any successful cases of neural surfing besides Darkcanuck's bots?

Rsalesc (talk)19:20, 29 August 2017

As far as I know, only Pris uses Neural Surfing. Even if this movement is one of the worst in the top 100, it is my best movement =). I think I should tune it more against mid-level guns so it would be better in general.

Dsekercioglu (talk)19:31, 29 August 2017
 
I found the problem. It can't dodge fast guns.

WaveSurfingChallengeBotA: 95.66
WaveSurfingChallengeBotB: 74.4
WaveSurfingChallengeBotC: 56.58

As you can see, the results against B and C are really low. I will probably add initial LT/CT/HOT guns.
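One way to read "initial LT/CT/HOT guns" is to pre-seed the surfer with the guess factors that simple targeters would fire at, so those angles are treated as dangerous from round one. A minimal sketch of that idea, assuming the usual guess-factor convention (this is my illustration, not Oculus's actual code; circular targeting is omitted since it also needs heading change):

```java
// Sketch: before the learner has data, dodge the angles that Head-On and
// Linear targeters would aim at, expressed as guess factors in [-1, 1].
public class InitialPredictors {
    /** Maximum escape angle for a target with Robocode's top speed of 8. */
    static double maxEscapeAngle(double bulletSpeed) {
        return Math.asin(8.0 / bulletSpeed);
    }

    /** A Head-On-Targeting gun always fires at GF 0. */
    static double headOnGF() {
        return 0.0;
    }

    /** A Linear-Targeting gun aims where constant lateral velocity leads. */
    static double linearGF(double lateralVelocity, double bulletSpeed) {
        double angle = Math.asin(lateralVelocity / bulletSpeed);
        return Math.max(-1.0, Math.min(1.0, angle / maxEscapeAngle(bulletSpeed)));
    }

    public static void main(String[] args) {
        double bulletSpeed = 14.0; // a power-2.0 bullet: speed = 20 - 3 * power
        System.out.println(headOnGF());                 // 0.0
        System.out.println(linearGF(8.0, bulletSpeed)); // 1.0 at full lateral speed
    }
}
```

At full lateral velocity a linear gun aims exactly at the maximum escape angle, i.e. GF 1, which is why these seeds cover the WaveSurfingChallenge bots' firing angles well.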
Dsekercioglu (talk)20:10, 30 August 2017

Wow, was looking for this (old) challenge! Thank you, MultiplyByZer0.

Regarding the experiment, I know NNs have the problem of slow learning, since there isn't much data at the beginning of the game. Couldn't reinforcing with firing waves in the first few rounds solve the problem, though? Another suggestion would be to use waves with low virtuality (those tick waves which are close to a firing tick) to compensate for the lack of information without polluting the network with flattening-like data right in the first rounds.
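The low-virtuality idea above can be sketched very simply: every tick fires a virtual wave, but its training weight decays with its distance (in ticks) from the nearest real firing tick, so early rounds still produce data without flooding the learner with flattener-like noise. The decay shape here is my assumption, just to illustrate:

```java
// Sketch: weight a tick wave by how "real" it is. A wave fired on the same
// tick as a real bullet gets full weight; purely virtual waves fade out.
public class TickWaveWeight {
    /** Weight in (0, 1]: 1.0 for a real firing wave, fading for virtual ones. */
    static double weight(int ticksFromNearestFiringTick) {
        return 1.0 / (1.0 + ticksFromNearestFiringTick);
    }

    public static void main(String[] args) {
        System.out.println(weight(0)); // 1.0, a real firing wave
        System.out.println(weight(1)); // 0.5
        System.out.println(weight(4)); // 0.2
    }
}
```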

I did something like that in my gun and it improved a lot. Of course, I'm no reference in Neural Targeting: I improved from a really bad gun to a miserable one :) Well, maybe you've already done that after all.

Rsalesc (talk)21:05, 30 August 2017
    • Thank you, MultiplyByZer0. I found out which part of my movement was weak.
    • I will probably solve the fast-targeting-methods problem with initial predictors. It is hard to learn how CT/LT guns fire, since they use velocity/heading/heading change, which I don't use.
    • I don't think the gun needs more data; there are already tons of it. I generally use a weighted crowd system which gives pretty good results in both TCAS and TCRM. I'm not thinking about the gun too much right now.
Dsekercioglu (talk)21:59, 30 August 2017
 

You should be getting 99+% against Bot A. If not, you have bugs in your surfing, or you aren't even attempting to control your distances. Look to Komarious or CunobelinDC for help here, and make sure you are predicting the same escape angles you intend to move in.

Once you've done that, if you aren't getting 95% against Bot B, you might still have bugs in the attribute collection of your surfing, or you need to improve your learning. This is the simplest kind of learning: a plain linear relationship between forward velocity and guess factor. If your learning algorithm can't quickly pick up a simple linear relationship, you need to rethink it. I would suggest using super simple learning (8 bins for the velocity value, plus a lower-weighted "all the data" buffer) to make sure you have the attribute collection correct, then move on to fixing your learning.
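The suggested baseline, velocity-segmented bins plus a lower-weighted unsegmented buffer, could look something like this. The bin counts and the 0.2 blend weight are my assumptions, not from the thread:

```java
// Sketch: guess-factor stats segmented on velocity, with a low-weighted
// "all the data" fallback so early rounds still have something to surf.
public class SegmentedStats {
    static final int GF_BINS = 47;
    static final int VEL_SEGMENTS = 8;
    double[][] segmented = new double[VEL_SEGMENTS][GF_BINS];
    double[] all = new double[GF_BINS];

    /** Map velocity in [-8, 8] to a segment index in [0, 7]. */
    static int velSegment(double velocity) {
        int seg = (int) ((velocity + 8.0) / 16.0 * VEL_SEGMENTS);
        return Math.min(VEL_SEGMENTS - 1, Math.max(0, seg));
    }

    /** Map a guess factor in [-1, 1] to a bin index. */
    static int gfBin(double gf) {
        int bin = (int) ((gf + 1.0) / 2.0 * (GF_BINS - 1) + 0.5);
        return Math.min(GF_BINS - 1, Math.max(0, bin));
    }

    /** Record a visit when a wave breaks. */
    void logHit(double velocity, double gf) {
        segmented[velSegment(velocity)][gfBin(gf)]++;
        all[gfBin(gf)]++;
    }

    /** Danger estimate: segment stats plus the low-weighted global fallback. */
    double danger(double velocity, double gf) {
        return segmented[velSegment(velocity)][gfBin(gf)] + 0.2 * all[gfBin(gf)];
    }
}
```

Against Bot B this is enough, because the linear velocity-to-GF relationship lands each velocity segment's visits in a single bin almost immediately.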

Finally, you should be getting 90% against Bot C. This can only be improved by adding better attributes that you think might inadvertently model your near-wall and escape-angle behavior.
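As one illustration of the kind of attribute meant here, a common wall feature in surfing movements is the fraction of the maximum escape angle the target can actually cover before hitting a wall. The field size, bullet power, and step count below are assumptions for the sketch, not anyone's actual bot:

```java
// Sketch: a wall-proximity attribute. Walk the orbit away from the current
// absolute bearing and report how far (as a fraction of the max escape
// angle) the target gets before leaving the field. Robocode angles: 0 is
// north, clockwise positive, so x uses sin and y uses cos.
public class WallFeature {
    static final double FIELD_W = 800, FIELD_H = 600, MARGIN = 18;

    /** 1.0 when the orbit is unobstructed, smaller when a wall cuts it short. */
    static double wallDistanceFeature(double x, double y, double absBearing,
                                      double distance, int orbitDirection) {
        double mea = Math.asin(8.0 / 14.0); // escape angle for a power-2 bullet
        for (int i = 0; i <= 100; i++) {
            double angle = absBearing + orbitDirection * mea * i / 100.0;
            double px = x + Math.sin(angle) * distance;
            double py = y + Math.cos(angle) * distance;
            if (px < MARGIN || px > FIELD_W - MARGIN
                    || py < MARGIN || py > FIELD_H - MARGIN) {
                return i / 100.0;
            }
        }
        return 1.0;
    }
}
```

A segment or network input built from this lets the learner separate "pinned against a wall" hits from open-field hits, which is exactly the near-wall behaviour Bot C exploits.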

Hope this helps. Better scores are of course possible, but they are very design-specific. However, with early non-DC versions of Cunobelin I was able to get a 99.9 / 96.8 / 95.1 score, and that was just BasicSurfer with segmented learning and distancing.

Skilgannon (talk)22:09, 30 August 2017
I just looked a little bit more carefully. I found three causes of getting hit:
    • An unknown wave
    • A bot-width calculation bug
    • Not enough precision
I will fix them all and try again.

Yes, I found a bug in the bot-width calculation.

Dsekercioglu (talk)22:26, 30 August 2017
 
Thank you, I found bugs in the MEA calculation, the bot width, and the precise prediction for True Surfing.
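For reference on the bot-width part: a common approximation in wave surfing is the angular width a 36x36 robot subtends at the wave source, using the half-width of 18 (some bots use 18 * sqrt(2) to cover the corners conservatively). This is the standard simplification, not necessarily how Oculus computes it:

```java
// Sketch: the angle a wave "sees" the robot occupying at a given distance.
// The robot body is a 36x36 square, so half-width 18 is the flat-side case.
public class BotWidth {
    static double botWidthAngle(double distance) {
        return 2.0 * Math.atan(18.0 / distance);
    }

    public static void main(String[] args) {
        // At 400 pixels the bot covers roughly 0.09 radians of the wave.
        System.out.println(botWidthAngle(400.0));
    }
}
```

Getting this wrong in either direction hurts: too narrow and you surf into hits you thought would miss, too wide and you flee bins that were actually safe.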
Dsekercioglu (talk)16:06, 31 August 2017

If there's one thing I learned from Robocode, it's that there are always more bugs. Sometimes the bugs aren't even in the code, but in the assumptions the code was written with. That second category can't be caught with tests, only by looking for deviations from expected behaviour and being smart. The first category can be solved with just a bunch of hard, boring verification work.

Skilgannon (talk)14:34, 3 September 2017