User talk:Kev


Dude, welcome back! Good to see ya. =) A couple things made me think of you recently:

  • We now have a Twin Duel division in the rumble.
  • At long last, someone made a higher ranking all-DC duelist than Hydra: Diamond. =)

How's things? Good luck with WaveSerpent 2.0... --Voidious 18:37, 28 September 2009 (UTC)

Hey Kev, nice to see an oldbie back. In addition to what Voidious listed, there's been a bit of a ruckus in the melee scene with Diamond and Portia topping Shadow, and Glacier aspiring to push Shadow down to #4 in melee soon. Looking forward to seeing how WaveSerpent 2.0 goes :) --Rednaxela 18:48, 28 September 2009 (UTC)


Contents

  • Nice work with Figment (2 replies, last modified 14:57, 17 May 2024)
  • BeepBoop seems to be losing ~1.5 APS (11 replies, last modified 04:02, 8 February 2023)
  • BeepBoop seems to be the new king (14 replies, last modified 06:17, 17 January 2023)
  • Welcome back! (4 replies, last modified 07:18, 21 May 2021)

Nice work with Figment

That's a much cleaner port of Mirage's gun than the one I cobbled together, although it looks like there are a few things you could nick from FireHammer to reduce Figment's size a little. I'll probably retire FireHammer later.

David414 (talk) 13:16, 14 May 2024

Thanks! I think the biggest advantage of Figment vs FireHammer is keeping multiple choice -- it's really important for having Mirage's gun work well. But I also think Figment could probably be dethroned by a circular targeting bot with really good movement (i.e., a better-tuned Wallaby).

--Kev (talk) 21:35, 16 May 2024

Yeah, dropping the multiple choice was painful.

I'd be interested to see if Mirage's gun could be squeezed enough to fit alongside a Capulet-style minimum-risk corner movement; I reckon that'd be close to peak performance for the strong-gun/simple-movement approach to a micro bot.

Hopefully I'll find the time to finish off Quantum sooner rather than later. If the gun works out as well as I'm anticipating I suspect an improved version could be even better than circular targeting for a micro.

David414 (talk) 14:57, 17 May 2024
 
 

BeepBoop seems to be losing ~1.5 APS

[Image: BeepBoopDiff.png]

Compared to the nearly identical 1.20, you can clearly see some random drops, which I suspect are skipped turns on some old hardware.

The previous result of ~94.8 APS can be reproduced on my computer, so I think the previous results can be trusted.

@Beaming seems to be contributing to RoboRumble the most recently; maybe we could work together to see if something can be done to ensure the reproducibility of RoboRumble.

Xor (talk) 08:33, 15 January 2023

I think I found a potential problem spot. One of my computers was 4 times slower but was using the CPU constant from a 4 times faster computer. I recalculated the CPU constant (by deleting the config file) and hope that the APS drop will resolve itself. It might also explain why a (subjectively) better version of my bot in development performs worse than the old one.

It would be nice if the rumble client recalculated the CPU constant at startup. It takes very little time and would provide more stability. But I also recall a discussion that active throttling in modern hardware makes this number just an estimate.

By the way, back in 2014 we had an interesting discussion (Thread:User talk:Beaming/Smart bots competition) about allowing long calculation times for bots. Maybe it is time to revive it, since ML approaches have developed quite a bit and they are CPU intensive.

On the other hand, making a top-ranking fast bot is a challenge in itself.

Beaming (talk) 19:51, 15 January 2023

I agree. Enforcing recomputation of the CPU constant at startup and, say, every 100 battles is necessary, as it strongly affects results and is easy to get wrong. Recomputing periodically would also mitigate the problem of other heavy tasks affecting RoboRumble, without adding too much overhead.

I'm also thinking about adding a test set that clients must pass before score submission is allowed, but that is a longer-term plan.

I'll submit a PR for recomputing the CPU constant; any suggestions are welcome.
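
Roughly, the recomputation could look like the sketch below: time a fixed, deterministic workload and turn that into a per-turn budget. This is only an illustration, not the actual Robocode/RoboRumble code; the workload size and the scaling divisor are arbitrary assumptions.

public final class CpuConstantEstimator {
    private static volatile double blackhole; // keeps the JIT from removing the benchmark

    /** Time a fixed workload and derive an estimated CPU constant in nanoseconds. */
    public static long estimateCpuConstantNanos() {
        long start = System.nanoTime();
        double sink = 0;
        for (int i = 0; i < 5_000_000; i++) { // fixed, deterministic benchmark workload
            sink += Math.sin(i) * Math.cos(i);
        }
        blackhole = sink;
        long elapsed = System.nanoTime() - start;
        // Scale the benchmark time into a per-turn allowance; the divisor is arbitrary here.
        return elapsed / 1000;
    }

    public static void main(String[] args) {
        long constant = estimateCpuConstantNanos();
        System.out.printf("Estimated CPU constant: %d ns (%.3f ms)%n", constant, constant / 1e6);
        // A client could rerun this at startup and every N battles (e.g. 100)
        // and refresh the value used for skipped-turn enforcement.
    }
}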

Xor (talk) 10:51, 16 January 2023
 

I'm also interested in adding a separate rumble with long calculation time.

I'll add an option in the rumble config file to multiply the CPU constant by a factor (warning: advanced usage); then a *SmartRumble* could be realized. The initial participants could be copied from GigaRumble ;)
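
As a hypothetical illustration of how that option might be read and applied (the property name below is made up, not an existing setting):

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public final class CpuConstantMultiplier {
    /** Load an assumed multiplier property and scale the measured CPU constant with it. */
    public static long scaledCpuConstant(long measuredNanos, String configPath) throws IOException {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream(configPath)) {
            props.load(in);
        }
        // Assumed property name; defaults to 1.0 (no change).
        double factor = Double.parseDouble(
                props.getProperty("roborumble.cpuconstant.multiplier", "1.0"));
        return (long) (measuredNanos * factor);
    }
}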

Xor (talk) 10:57, 16 January 2023
 

I bought a low-end PC running Linux with 4 cores @ 1.33 GHz (a 2016 CPU) and turbo boost disabled. Its CPU constant is 5x that of my main computer.

I tried running the entire rumble with RoboRunner, two instances in parallel (which takes ~20x as long to complete, since I normally run 8 instances), and so far the scores look fine. So I guess what actually causes strange scores is indeed using inaccurate CPU constants.

Anyway, I haven't tried running other background tasks at the same time (because I don't have such tasks to run), so I'm not sure whether that affects the score as well.

Xor (talk) 17:29, 24 January 2023
 

BeepBoop 1.21a seems to be losing only 0.1 APS now compared to 1.2 (and 0.2 APS compared to my local run).

However, there are still some pairings with weird scores:

reeder.colin.WallGuy3 1.0
hs.SimpleHBot 1.3
darkcanuck.Holden 1.13a
tobe.Saturn lambda

I'm also running a rumble client in the meantime, and couldn't find the corresponding pairings in my logs.

@Beaming Could you please also have a look at the logs to see which machine is producing the weird scores?

I suspect most of the scores should be fine now, but some weird scores may still be produced under heavy load.

Xor (talk) 03:34, 6 February 2023

Sure, but what should I look for in my logs? Are they even stored for a long time? All I see is the last uploaded file.

Also, note that there are uncertainties. 0.1 APS is not that much; battles usually have a 1-6 APS spread per pairing. Also, some bots keep logs, so it might be that my setup has the longest (most complete) stored enemy visit-count logs.

Also, it is possible that the original scoring was done on a fast CPU where the CPU constant was in BeepBoop's favor.

But I also notice that newly submitted bots start out better and then drop 0.5-1% APS as the rumble settles.

Beaming (talk) 05:16, 6 February 2023

I run the RoboRumble client with nohup, so I can just grep nohup.out. You can also use bash redirection to persist the log. Otherwise it's impossible to track down the weird scores.

The reason that bots drop 0.5-1% APS is that some battles produce insane results, greatly affecting the final score.

In a controlled environment (both on the newest hardware and on a low-end 1.33 GHz processor), you get a very stable score, with less than 0.1 APS difference between 5000 battles and 20000 battles. This observation also rules out log-saving bots as the cause: logically, one battle is enough for them to save data, and increasing that to 20 battles doesn't help.

Xor (talk) 08:59, 6 February 2023
 

Look at BeepBoop against reeder.colin.WallGuy3: it gets 81.28 APS instead of ~98 over 4 battles. You can think of this as 3 battles at 98 APS and 1 at 30 APS, since (3 × 98 + 30) / 4 = 81. What happened in the battle where it got 30 APS? I can only imagine a heavily loaded machine with much longer computation times than usual, with both participants skipping a lot of turns due to insufficient time.

The problem with this is that you can never reproduce the result. It has nothing to do with uncertainties: running BeepBoop against reeder.colin.WallGuy3 will always give ~98 APS as long as the environment is controlled. You never get to know what actually happened.

Xor (talk) 09:07, 6 February 2023

I see your point, but we should not forget that there are probabilities involved. Maybe WallGuy3 got lucky once; for example, BeepBoop could have spawned in a corner very close to the opponent.

Also, looking at the rumble logs is a bit useless without the ability to correlate them with the system load.

Ideally, we should solve this in the Robocode client rather than by running in a pristine/controlled environment; none of us can dedicate a computer to a single task. A potential solution is to have a thread which estimates CPU load (but even then there could be transient load spikes which go undetected).
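
Such a load-monitoring thread might look roughly like the sketch below, using the JDK's OperatingSystemMXBean. The threshold and polling interval are arbitrary, and getSystemLoadAverage() returns -1 on platforms that don't support it (e.g. Windows), so this is Linux/macOS-oriented.

import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import java.util.concurrent.atomic.AtomicBoolean;

public final class LoadWatchdog implements Runnable {
    private final OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
    private final AtomicBoolean overloaded = new AtomicBoolean(false);
    private final double threshold; // per-core load above which results are flagged

    public LoadWatchdog(double threshold) {
        this.threshold = threshold;
    }

    /** The client could check this before uploading a battle result. */
    public boolean isOverloaded() {
        return overloaded.get();
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            double loadPerCore = os.getSystemLoadAverage() / os.getAvailableProcessors();
            overloaded.set(loadPerCore > threshold);
            try {
                Thread.sleep(5_000); // sample every 5 seconds
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}

A client could start it with something like new Thread(new LoadWatchdog(0.9), "load-watchdog").start() and discard (or re-run) battles that finished while isOverloaded() was true.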

Beaming (talk) 02:54, 8 February 2023
 
 
 
 
 

BeepBoop seems to be the new king

Congratulations! BeepBoop is at the very top.

Would you mind hinting at your new top-of-the-line research direction?

Beaming (talk) 01:22, 26 December 2022

Congratulations (again) from me too ;) BeepBoop has had very surprising results since 1.2 (nearly 95 APS!!!). And yet nothing worked when I tried to use gradient descent to train models. Would you mind sharing a little bit more about this part? E.g. initialization, learning rate, how to prevent getting a zero or negative exponent in the x^a formula…

Xor (talk) 02:07, 26 December 2022

I’ve been meaning to release the code for the training, but it’s currently a huge mess and I’m pretty busy! In the meantime, here are some details that might help:

  • I initialized the powers to 1, biases to 0, and multipliers to a simple hand-made KNN formula.
  • I constrained the powers to be positive, so I guess the formula should really be written as w(x+b)^abs(a).
  • I used Adam with a learning rate of 1e-3 for optimization.
  • Changing the KNN formula of course changes the nearest neighbors, so I alternated between training for a couple thousand steps and rebuilding the tree and making new examples.
  • For simplicity/efficiency, I used binning to build a histogram over GFs for an observation. Simply normalizing the histogram so it sums to 1 to get an output distribution doesn’t work that well (for one thing, it can produce very low probabilities if the kernel width is small). Instead, I used the output distribution softmax(t * log(histogram + abs(b))) where t and b are learned parameters initialized to 1 and 1e-4.
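
As a rough illustration of the formulas above (not BeepBoop's actual code; the class and parameter names here are made up), the learned kNN weighting w(x+b)^abs(a) and the softmax output distribution could be written like this:

public final class LearnedKnnFormula {
    // Per-attribute learned parameters: multiplier w, bias b, power a.
    private final double[] w, b, a;
    // Learned temperature t and histogram smoothing bias, initialized to 1 and 1e-4.
    private final double temperature;
    private final double histBias;

    public LearnedKnnFormula(double[] w, double[] b, double[] a,
                             double temperature, double histBias) {
        this.w = w; this.b = b; this.a = a;
        this.temperature = temperature; this.histBias = histBias;
    }

    /** Transform one raw attribute into kd-tree space: w * (x + b)^|a|.
     *  Assumes x + b >= 0 (attributes normalized to [0, 1]). */
    public double transform(int attr, double x) {
        return w[attr] * Math.pow(x + b[attr], Math.abs(a[attr]));
    }

    /** Turn a binned GF histogram into a distribution: softmax(t * log(hist + |b|)). */
    public double[] outputDistribution(double[] histogram) {
        double[] logits = new double[histogram.length];
        for (int i = 0; i < histogram.length; i++) {
            logits[i] = temperature * Math.log(histogram[i] + Math.abs(histBias));
        }
        return softmax(logits);
    }

    private static double[] softmax(double[] logits) {
        double max = Double.NEGATIVE_INFINITY;
        for (double l : logits) max = Math.max(max, l);
        double sum = 0;
        double[] out = new double[logits.length];
        for (int i = 0; i < logits.length; i++) {
            out[i] = Math.exp(logits[i] - max); // subtract max for numerical stability
            sum += out[i];
        }
        for (int i = 0; i < out.length; i++) out[i] /= sum;
        return out;
    }
}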
--Kev (talk) 16:10, 3 January 2023

Thanks for the detailed explanation! It is not easy to get so many details right, which explains how mighty BeepBoop is, not to mention the innovations.

Xor (talk) 04:57, 4 January 2023
 
 

Thanks! My guess for the next innovation that could improve bots is active bullet shadowing. Instead of always shooting at the angle your aiming model gives as most likely to hit, it is probably better to sometimes shoot at an angle that is less likely to hit if it creates helpful bullet shadows for you. This idea would especially help against strong surfers whose movements have really flat profiles (so there isn’t much benefit from aiming precisely). I never got around to implementing it, so it remains to be seen if it actually is useful!
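
As a very rough sketch of what choosing a firing angle this way might look like (hypothetical names, untested; it assumes you already have a per-angle hit-probability estimate and some way to score the shadows a bullet fired at that angle would cast on enemy waves):

public final class ActiveShadowGun {
    /** Weight on shadow benefit relative to hit probability; would need tuning. */
    private static final double SHADOW_WEIGHT = 0.3;

    public interface AngleEvaluator {
        double hitProbability(double firingAngle);   // from the aiming model
        double shadowBenefit(double firingAngle);    // e.g. expected damage avoided via shadows
    }

    /** Pick the angle maximizing hit probability plus discounted shadow benefit. */
    public static double chooseFiringAngle(double[] candidateAngles, AngleEvaluator eval) {
        double bestAngle = candidateAngles[0];
        double bestScore = Double.NEGATIVE_INFINITY;
        for (double angle : candidateAngles) {
            double score = eval.hitProbability(angle)
                    + SHADOW_WEIGHT * eval.shadowBenefit(angle);
            if (score > bestScore) {
                bestScore = score;
                bestAngle = angle;
            }
        }
        return bestAngle;
    }
}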

--Kev (talk) 16:02, 3 January 2023

Thanks for the insights and ideas. Bullet shielding tempted me a while ago. I thought that if one can intercept a bullet wave close to the launch point, the bullet shadow will be big enough to slide into. But that requires good knowledge of when a bullet will be fired. I guess it could be done similarly to how it's done in DrussGT, which has a tree to predict the opponent's bullets, segmented on energy and distance. (At least I remember reading on the wiki about some way to predict an enemy wave this way.) But my attempts to do it were not very successful.

By the way, could you repackage your bot with an older Java version? I am running the rumble, but it fails on your bot, complaining:

Can't load 'kc.mega.BeepBoop 1.21' because it is an invalid robot or team.

I think the current agreement is that Java JDK 11 or lower is accepted. If you look at the rumble stats, you will see that your bot has fewer battles than many others.

Beaming (talk) 04:20, 4 January 2023

I think RoboRumble should be inclusive, which means the client should be run with the latest LTS version of Java to allow bots built for more Java versions to participate. LTS versions are also meant to be more stable, which helps with more stable results.

I also updated the guide to suggest Java 17, which is the latest LTS version for now, instead of Java 11. Would you mind upgrading the Java version of your client?

Xor (talk) 05:03, 4 January 2023

Sure. I am upgrading my clients to Java 17. It seems to be OK, except for the warnings about deprecated calls:

WARNING: System::setSecurityManager has been called by net.sf.robocode.host.security.RobocodeSecurityManager (file:/home/evmik/misc/robocode-1.9.4.2/libs/robocode.host-1.9.4.2.jar)
WARNING: Please consider reporting this to the maintainers of net.sf.robocode.host.security.RobocodeSecurityManager
WARNING: System::setSecurityManager will be removed in a future release

I think this is addressed in newer Robocode versions, but the rumble still accepts only 1.9.4.2.

Beaming (talk) 00:35, 5 January 2023

It has never been addressed. Also, there's currently no solution for after Java removes SecurityManager, other than sticking to Java 17 (or newer LTS versions that still ship SecurityManager). Tank Royale could be the long-term plan, but only after some cross-platform sandbox solution has been implemented.

Xor (talk) 12:58, 13 January 2023
 

Btw, BeepBoop seems to be losing APS due to inconsistencies in RoboRumble clients (e.g. skipped turns).

[Image: BeepBoopDiff.png]

http://literumble.appspot.com/BotCompare?game=roborumble&bota=kc.mega.BeepBoop%201.21&botb=kc.mega.BeepBoop%201.2&order=-Diff%20APS

BeepBoop runs fine on my computer, with the same result as the (previous) RoboRumble and without skipped turns. Could you share some information about your environment, e.g. how many clients run in parallel and whether the machine is dedicated (not running any other tasks)? This may heavily affect the reproducibility of RoboRumble.

Xor (talk) 13:46, 13 January 2023
 
 
 
 
 

Welcome back!

It has been 9 years(!) since I first touched Robocode, and I can't even remember when my last attempt to create a competitive mega bot was!

Anyway, looking forward to a new challenger in all categories of the rumble!

Xor (talk) 11:19, 12 May 2021

Thank you! I guess the robocode scene is quieter than it used to be, but it's nice to see many new strong bots in the rumble since I last checked!

--Kev (talk) 19:06, 13 May 2021

Welcome back from me too! You did a hell of an update to WaveShark, going from #9 to #4 in the MicroRumble.

It is indeed quite quiet; the time when more than 5 people updated their bots weekly really is over. Nevertheless, any involvement, certainly from someone who has proven to be one of the best, could spark a burst of activity again.

GrubbmGait (talk) 15:31, 15 May 2021

Yeah, I'm happy to get WaveShark above Thorn! And congrats on the improvement to GresSuffard!

--Kev (talk) 06:34, 17 May 2021

Welcome back! With all this activity, I'll eventually have to upgrade my own bots as well :)

Dsekercioglu (talk) 07:18, 21 May 2021