Talk:ScalarBot


Contents

Thread title               Replies   Last modified
Beaten DrussGT!            3         13:20, 12 October 2017
Adding Bullet Shielding    1         16:29, 4 October 2017
Third Place                5         10:55, 3 October 2017
FastKDE?                   4         07:32, 1 October 2017
Tough to beat              9         14:27, 14 September 2017
Adding Bullet Shadowing    0         16:24, 4 October 2017

Beaten DrussGT!

After two battles you have beaten DrussGT. Hopefully in the next battles too. Congratulations.

Dsekercioglu (talk) 20:30, 10 October 2017


Yes, ScalarBot has only barely beaten RougeDC and loses against DrussGT now =|

Dsekercioglu (talk) 06:05, 11 October 2017

It's only losing to two bots now ;) Diamond and Shadow, and only by a small margin. Anyway, always remember that older bots are never tuned against newer bots, so defeating the existing ones is not truly a fair competition...

And it costs me ~1.46 APS to achieve that.

Xor (talk) 13:20, 12 October 2017
 
 
 

Adding Bullet Shielding

Have you ever thought about adding bullet shielding? You could get extremely high scores against some top bots.

Dsekercioglu (talk) 15:58, 4 October 2017

Yes, I've been thinking about that for a while, even before ShieldBot (which ranks #3 in vote) was created. Anyway, bullet shielding would mask some improvements in both gun and movement, so I decided not to add it until (probably) the last version.

ShieldBot shares a lot of code with ScalarBot (actually all the framework code comes from ShieldBot, and that framework is designed for multi-mode operation, especially bullet shielding), which makes adding a bullet shield painless (just copy the shield module into ScalarBot and it's done).

Xor (talk) 16:26, 4 October 2017
 

Third Place

Congratulations! You got third place in RoboRumble. I hope you don't have anything left to improve now. =)
Dsekercioglu (talk) 06:44, 3 October 2017

Unfortunately, I have a new Anti-Surfer gun and a strong flattener to add to it =) Anyway, I'm taking a break right now. Then I'll probably work on melee first, though.

Xor (talk) 06:59, 3 October 2017

That's impressive! Knight has just caught up with Roborio 1.2.3 (which has no BS), and it has all of these features. It seems that I really have problems taking baby steps in Robocode :P Your patience in climbing the RR bit by bit amazes me, and now I see that coming to melee as well. There's no way you won't give Cb and Skilgannon a hard time. Congratulations!

Rsalesc (talk) 08:02, 3 October 2017

Thanks! And congratulations on Knight's achievement! I tried a refactor in the past but soon went back to the old code; it seems rather hard to catch up with the old version after a full rewrite, but your success motivates me! Looking forward to an amazing competition in both 1v1 and melee!

Xor (talk) 08:17, 3 October 2017

What a great achievement, congratulations! It's really nice to see so many improvements happening and I am excited about your next updates, in 1vs1 as well as in melee :)

Cb (talk) 10:46, 3 October 2017

Thanks! It’s also exciting to see a new melee king ;)

Xor (talk) 10:55, 3 October 2017
 
 
 
 
 

FastKDE?

I've heard that recent work has made some improvements over traditional KDE (kernel density estimation).

And the fastKDE project contains an implementation.

Any thoughts on that?

Xor (talk) 16:23, 30 September 2017

Well, assuming that you are using binning followed by KDE, this process doesn't seem to be anywhere near a bottleneck in Robocode. Or is it? I mean, binning reduces the number of kernel evaluations from quadratic order to linear order if you precompute the results for each possible delta. You still get a quadratic number of additions and multiplications, but it shouldn't be that expensive, and even if it is, the biggest improvement I can imagine is using FFT here, which would not have a big impact for the number of bins we usually use in Robocode (I've never seen more than 120). And I don't see any advantage in not using binning if you have more than number_of_bins samples, since the GuessFactor [-1, +1] range is pretty "small".
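
A minimal sketch of the binning idea, in Java (the bin count, kernel shape and class name are illustrative, not taken from any particular bot):

    // Binned kernel density over GuessFactors: precompute the kernel value for
    // every possible bin delta once, then each new sample costs O(bins) additions.
    public class BinnedKde {
        static final int GF_BINS = 121;              // illustrative bin count
        static final double BANDWIDTH_BINS = 4.0;    // kernel width, in bins

        private final double[] density = new double[GF_BINS];
        private final double[] kernel = new double[GF_BINS]; // kernel[|i - j|]

        public BinnedKde() {
            for (int d = 0; d < GF_BINS; d++) {
                double x = d / BANDWIDTH_BINS;
                kernel[d] = Math.exp(-0.5 * x * x);  // Gaussian shape, precomputed
            }
        }

        static int bin(double guessFactor) {         // map GF in [-1, 1] to a bin
            return (int) Math.round((guessFactor + 1) * 0.5 * (GF_BINS - 1));
        }

        public void addSample(double guessFactor) {  // O(bins) work per sample
            int center = bin(guessFactor);
            for (int i = 0; i < GF_BINS; i++) {
                density[i] += kernel[Math.abs(i - center)];
            }
        }

        public double bestGuessFactor() {            // peak of the estimated density
            int best = 0;
            for (int i = 1; i < GF_BINS; i++) {
                if (density[i] > density[best]) best = i;
            }
            return best * 2.0 / (GF_BINS - 1) - 1;
        }
    }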

Anyway, it seems to be a really nice article to read :P Maybe those optimizations could work well for swarm targeting?

Rsalesc (talk) 23:20, 30 September 2017

Well, afaik DrussGT is using 151 bins in its movement, iirc. And my old experimental anti-aliased VCS gun uses more than 1500 bins (beyond which adding more bins no longer increases performance).

In targeting, DrussGT and ScalarBot (inspired by DrussGT) use max overlap to reconstruct firing angles, not kernel density estimation, and it's O(n log n).
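
A rough sketch of that max-overlap reconstruction: sort the endpoints of the hit-angle intervals and sweep across them, keeping the angle where the most intervals overlap. The interval arrays and class name below are illustrative:

    import java.util.Arrays;

    // Max-overlap firing-angle reconstruction in O(n log n): each data point
    // contributes an angle interval [low, high] that would have hit; the best
    // firing angle is where the most intervals overlap.
    public class MaxOverlap {
        public static double bestAngle(double[] lows, double[] highs) {
            int n = lows.length;
            double[][] events = new double[2 * n][2];
            for (int i = 0; i < n; i++) {
                events[2 * i] = new double[] { lows[i], +1 };      // interval opens
                events[2 * i + 1] = new double[] { highs[i], -1 }; // interval closes
            }
            Arrays.sort(events, (a, b) -> a[0] != b[0]
                    ? Double.compare(a[0], b[0])
                    : Double.compare(b[1], a[1]));   // open before close at ties
            int overlap = 0, best = 0;
            double bestAngle = n > 0 ? events[0][0] : 0;
            for (double[] e : events) {
                overlap += (int) e[1];
                if (overlap > best) {
                    best = overlap;
                    bestAngle = e[0];
                }
            }
            return bestAngle;
        }
    }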

Note that by KDE I'm not only referring to reconstructing firing angles, but also to kNN. Actually we do KDE on the entire data set, on every dimension, then calculate the conditional density function (to reconstruct firing angles).

Anyway, fastKDE is not about accelerating the existing computation; it's about getting at the real probability density function (which includes computing the bandwidth and shape function effectively) with far fewer samples. You know, in Robocode the sample count is really restricted, and I think this method is exactly what modern bots need.

And my thought is that the use of kNN in Robocode is just an acceleration of KDE. Instead of computing the KDE contribution of every data point, we only use the nearest ones.

However, so far we are using an artificial bandwidth & shape function in this process. And I think fastKDE could bring proper computation of the bandwidth & shape function to Robocode.
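
In code, that "kNN as truncated KDE" view looks roughly like the sketch below: the bandwidth and Gaussian kernel are hand-picked, which is exactly the artificial bandwidth & shape function in question. The brute-force neighbor search and all names are illustrative:

    import java.util.Comparator;
    import java.util.List;

    // kNN viewed as truncated KDE: instead of summing a kernel over every
    // stored situation, sum it only over the k situations nearest to the
    // current one. Bandwidth and kernel shape are hand-picked here.
    public class KnnKde {
        public static class Sample {
            double[] features;   // normalized situation attributes
            double guessFactor;  // observed firing angle, as a GuessFactor
        }

        static double distSq(double[] a, double[] b) {
            double s = 0;
            for (int i = 0; i < a.length; i++) {
                double d = a[i] - b[i];
                s += d * d;
            }
            return s;
        }

        // Density that the k nearest samples assign to a candidate GuessFactor.
        public static double density(List<Sample> data, double[] query,
                                     int k, double candidateGf, double bandwidth) {
            return data.stream()
                    .sorted(Comparator.comparingDouble((Sample s) -> distSq(s.features, query)))
                    .limit(k)                           // keep only the k nearest
                    .mapToDouble(s -> {
                        double x = (candidateGf - s.guessFactor) / bandwidth;
                        return Math.exp(-0.5 * x * x);  // hand-picked Gaussian kernel
                    })
                    .sum();
        }
    }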

Xor (talk) 02:45, 1 October 2017

I use max overlap in O(n log n) in Monk's swarm gun as well, because of the great amount of data, and I see those subquadratic approaches as a very nice way to free up time for other time-consuming tasks. Anyway, looking closer, fastKDE seems very useful at first glance, given that it could even be used on top of the existing kNN guns just to weight the queried data more carefully. The real question now is whether it's worth understanding and implementing :P That's probably a topic for the future. Maybe you're gonna be the first one to put your hands on it?

Rsalesc (talk) 05:01, 1 October 2017

Yes, that's probably a topic for the future. I'm putting it here to remind myself to try it at some point, and I'll always be glad if someone else gets there first. Anyway, some experiments on that are on the way ;p

Xor (talk) 07:32, 1 October 2017
 
 
 
 

Tough to beat

Maybe I'm late in noticing this (took some days off from Robocode), but your bot is really tough to beat! Congrats on that; it's amazing, since a reasonable portion of the bots in the top 10 are pretty much beatable. I'm struggling against ScalarBot as hard as I struggle against Gilgalad. Keep up the good work. And 8% vote is just incredible :)

Rsalesc (talk) 13:17, 13 September 2017

Thanks! I'm also quite surprised by the vote ;) I think my success is largely due to the way I keep track of enemy firing ;) I use only one best match from each tree (which can be considered a good source of randomness, from the guns' point of view), making the movement really unpredictable for the strongest guns ;) What led to this approach is that my Kd-tree had a bug when searching for more than one point back when I was first developing my movement. All my attempts to use more than one match failed at that time, but it nevertheless works surprisingly well against the top bots, even without a flattener.
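
As a sketch, the "one best match per tree" idea boils down to something like this (the tree interface and the rest of the wave-surfing plumbing are assumed; all names are illustrative):

    import java.util.ArrayList;
    import java.util.List;

    // Build surfing dangers from the single nearest match of each stat tree,
    // instead of smoothing over many neighbors. Each tree sees the current
    // situation differently, so the resulting set of danger angles is hard
    // for an enemy gun to model.
    public class SingleMatchDangers {
        public interface StatTree {
            // GuessFactor of the single most similar logged situation.
            double nearestGuessFactor(double[] situation);
        }

        public static List<Double> dangerousGuessFactors(
                List<StatTree> trees, double[] situation) {
            List<Double> dangers = new ArrayList<>();
            for (StatTree tree : trees) {
                dangers.add(tree.nearestGuessFactor(situation)); // one match per tree
            }
            return dangers; // surf these as the angles the enemy is most likely to fire at
        }
    }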

However, with this approach, some simple GuessFactor guns can hit it almost as well as the top guns ;/ Therefore its APS rank is generally much lower than its PL rank.

And that is also why a lot of top bots are easy to beat: they are really good at dodging GuessFactor guns, which makes them pretty predictable to the top guns.

Then they use flatteners, but their nature of being easy to hit never changes. Flatteners only work against fast-decay, lightly segmented guns, not against kNN "pattern matchers".

Xor (talk) 14:58, 13 September 2017

It seems that the ideal thing would be to figure out a way to learn which class of targeting a robot belongs to. We know that it is harder than predicting how an enemy moves, because the enemy gives us way less information about its gun, but you could try some conditions (not only hit rate) to estimate this, instead of letting your learner figure out everything about the enemy's targeting method. I usually hate to mix hard-coded conditions with learning methods, but it could work out here.

Rsalesc (talk) 16:18, 13 September 2017

Maybe you don't really need hard-coded conditions here; they are really fragile, as you are making strong assumptions about your opponents.

Instead, even with less information, their targeting method will be exposed by the statistics they leave behind. And why not let your bot figure out for itself which targeting method they match best?

Just hard-code a bunch of virtual guns like EnergyDome does, select the best, then dodge it.
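
As a sketch, that virtual-gun selection for dodging could look like the following (EnergyDome's actual implementation may well differ; the wave type and every name here are illustrative):

    import java.util.List;

    // Virtual guns used in reverse: model a few candidate enemy guns, score
    // each by how often it would have hit us on past waves, and dodge the
    // predictions of the highest-scoring one.
    public class AntiGunSelector {
        public interface EnemyGunModel {
            // Would this modeled gun's predicted angle have hit us on that wave?
            boolean wouldHaveHit(Object wave);
        }

        public static EnemyGunModel mostLikelyGun(List<EnemyGunModel> models,
                                                  List<Object> pastWaves) {
            EnemyGunModel best = models.get(0);
            double bestRate = -1;
            for (EnemyGunModel model : models) {
                int hits = 0;
                for (Object wave : pastWaves) {
                    if (model.wouldHaveHit(wave)) hits++;
                }
                double rate = (double) hits / Math.max(1, pastWaves.size());
                if (rate > bestRate) {
                    bestRate = rate;
                    best = model;  // dodge this model's predictions on new waves
                }
            }
            return best;
        }
    }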

Xor (talk) 02:26, 14 September 2017

That's something I've been trying to put into a robot since I first read this wiki. If we can decide which gun is better, why can't we decide which way to move? I just intuitively think that the VG results may not lead to a reasonable decision. I think it will take some time for it to figure out the difference between a normal GF gun and a top gun, if it ever figures it out correctly. I think there is a huge difference between which gun hits me the most and which gun the enemy is actually using. It can be quite misleading. I would be happy to make this idea work, but I think I'd have to come up with something better.

Notice that by simple GF targeters I mean simple, lightly segmented buffers.

Rsalesc (talk) 10:34, 14 September 2017

Or, can we partition all guns (without a VG array) into three types: non-learning guns, statistical guns, and pattern matchers? HOT, LT and CT would fall into the first type, and traditional GF guns into the second. Pattern matchers are obviously the third. And a highly segmented GF gun, or a DC gun, would then be something between statistical guns and pattern matchers.

For non-learning guns, the bearing offset they fire at remains constant in each situation, and hard-coding a bunch of them is also easy, as sketched below.
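
A sketch of those hard-coded non-learning predictions (standard head-on / linear / circular targeting; parameter names are illustrative, angles follow Robocode's clockwise-from-north convention):

    // Bearing offsets (relative to head-on) that classic non-learning guns
    // would fire at, so the movement can dodge exactly those angles.
    public class NonLearningGuns {
        // Head-On Targeting always fires straight at the target.
        public static double headOnOffset() {
            return 0;
        }

        // Linear Targeting offset for a target moving at constant velocity;
        // lateralVelocity is the velocity component perpendicular to the line
        // of sight, bulletSpeed = 20 - 3 * bulletPower.
        public static double linearOffset(double lateralVelocity, double bulletSpeed) {
            return Math.asin(lateralVelocity / bulletSpeed);
        }

        // Circular Targeting: project the target along a circular path until
        // the bullet could have covered the distance to the projected position.
        // The target starts at (0, distance) relative to us; heading and
        // turnRate are in radians.
        public static double circularOffset(double distance, double heading,
                                            double velocity, double turnRate,
                                            double bulletSpeed) {
            double x = 0, y = distance;
            double h = heading;
            long ticks = 0;
            while (ticks * bulletSpeed < Math.hypot(x, y)) {
                x += Math.sin(h) * velocity;
                y += Math.cos(h) * velocity;
                h += turnRate;
                ticks++;
            }
            return Math.atan2(x, y);   // offset from head-on (which is angle 0 here)
        }
    }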

Statistical guns learn slowly, and for a given situation their firing angles stay the same for a period of time. Also, they tend to fire at past firing angles: the more they have fired at a given angle, the more likely they are to fire at it again.

Pattern matchers are unpredictable unless you know their exact settings. The only thing you can do is be unpredictable as well, and add some noise to your movement.

Mainstream wave surfing mostly assumes something similar to statistical guns. It keeps track of enemy firing angles, then dodges them deliberately. Recorded firing angles are often weighted by frequency & elapsed time.

This approach works very well against statistical guns, and it's also good for dodging non-learning guns without a VG array. However, its strength is also its weakness: pattern matchers work well against it, not to mention lightly segmented, fast-decay guns.

I think it's not hard to separate a non-learning gun from everything that learns; however, a statistical gun cannot be separated that well from a pattern matcher. Anyway, hit rate is always a decent criterion, although maybe not the best.

Apart from the enemy hit rate, another approach may be to use a bunch of buffers (or trees), some keeping tabs on hits, some keeping tabs on visits. And as a criterion, use not only a buffer's "hit rate" but also its miss rate, as the miss rate is what affects the enemy hit rate the most. This is mostly what Tomcat does; a rough sketch follows.
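
A rough sketch of that hit-vs-visit scoring (Tomcat's real implementation surely differs; the bin count, names and scoring formula are illustrative):

    // One of several competing movement buffers: it logs where enemy bullets
    // actually hit ("hits") and where we merely crossed waves ("visits").
    // A buffer whose most-visited bin also collects the real hits explains
    // the enemy gun well; hits landing elsewhere are misses for this model.
    public class HitVisitBuffer {
        static final int BINS = 47;                     // illustrative

        private final double[] hits = new double[BINS];    // real bullet hits
        private final double[] visits = new double[BINS];  // waves we crossed

        public void logVisit(int bin) { visits[bin]++; }
        public void logHit(int bin)   { hits[bin]++; }

        // Hits explained by this buffer's prediction minus hits it missed.
        public double score() {
            int predicted = 0;
            for (int i = 1; i < BINS; i++) {
                if (visits[i] > visits[predicted]) predicted = i;
            }
            double explained = hits[predicted];
            double missed = 0;
            for (int i = 0; i < BINS; i++) {
                if (i != predicted) missed += hits[i];
            }
            return explained - missed;  // prefer buffers that rarely miss
        }
    }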

Xor (talk) 13:33, 14 September 2017
 
 
 
 

Yeah, congrats on the strong vote and PL! It's actually interesting what your problem bots are; you have some interesting ones like SilverSurfer and SniperFrog. I guess your unique movement makes you susceptible to different, specific types of guns.

Also I think vote is a really interesting score metric. While it does say how strong your bot is, it also says how unique your bot is, since you only need to be best by a very small margin, and against a specific set of bots. Getting vote for a small set of bots is hard. Getting vote much higher is really, really hard.

Skilgannon (talk) 17:59, 13 September 2017

Thanks a lot! And it also has a lot of interesting problem bots which are relatively simple, e.g. FloodHT, which segments simply on whether you're near a wall or not, plus some traditional attributes like distance. No fancy attributes, no decay, but it hits me well ;)

Agree that vote is really hard to earn, and it is often earned by surprise ;) Maybe what contributes to vote the most is movement, when you dodge effectively. But a really strong gun that is able to exploit some hidden weakness may also help.

Anyway, ScalarBot's gun is completely unchanged since the initial release, and it also has a lot of bugs I'm not fixing for now ;) Maybe some bug also makes its targeting unique, giving some bots trouble.

Xor (talk) 02:36, 14 September 2017
 

And... to the top 10 with 8 pairings left! Just wanted to register this moment :)

Rsalesc (talk) 22:00, 13 September 2017

Thanks! It's simply because I added a more traditional view for dodging, which works surprisingly well ;) Being unique is not always a good thing ;)

Xor (talk) 02:38, 14 September 2017
 
 

Adding Bullet Shadowing

Thanks for that! Yes, I have been doing bullet shadowing since 0.012i, and it really helped my PL (and APS, definitely). Since I don't have anything anti-surfer or anti-anti-surfer (e.g. a flattener), bullet shadows are a good way to poison the opponent's data imo.
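
As a sketch, a passive bullet shadow is just the set of angles on an enemy wave that our own bullet will pass through, so an enemy bullet fired at those angles would collide with ours and can never reach us. The version below is simplified to point angles (a real implementation computes full angular intervals), and all names are illustrative:

    import java.util.ArrayList;
    import java.util.List;

    // Step our own bullet forward tick by tick; whenever it lies inside the
    // ring the enemy wave sweeps during that tick, the absolute angle from the
    // wave source to the bullet is shadowed (safe to sit at).
    public class BulletShadow {
        public static List<Double> shadowedAngles(
                double bulletX, double bulletY, double bulletHeading, double bulletSpeed,
                double waveX, double waveY, double waveRadius, double waveSpeed) {
            List<Double> shadows = new ArrayList<>();
            double x = bulletX, y = bulletY, r = waveRadius;
            for (int tick = 0; tick < 200 && r < 2000; tick++) {
                double nextX = x + Math.sin(bulletHeading) * bulletSpeed;
                double nextY = y + Math.cos(bulletHeading) * bulletSpeed;
                double nextR = r + waveSpeed;
                double dist = Math.hypot(nextX - waveX, nextY - waveY);
                if (dist >= r && dist <= nextR) {
                    // our bullet crosses the wave front during this tick
                    shadows.add(Math.atan2(nextX - waveX, nextY - waveY));
                }
                x = nextX;
                y = nextY;
                r = nextR;
            }
            return shadows;  // remove these angles from the wave's danger profile
        }
    }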

Xor (talk) 16:15, 4 October 2017