User talk:Kev
Dude, welcome back! Good to see ya. =) A couple things made me think of you recently:
- We now have a Twin Duel division in the rumble.
- At long last, someone made a higher ranking all-DC duelist than Hydra: Diamond. =)
How's things? Good luck with WaveSerpent 2.0... --Voidious 18:37, 28 September 2009 (UTC)
Hey Kev, nice to see an oldbie back. In addition to what Voidious listed, another recent happening is that there's been a bit of a ruckus in the melee scene with Diamond and Portia topping Shadow, and Glacier aspiring to push Shadow down to #4 in melee soon. Looking forward to seeing how WaveSerpent 2.0 goes :) --Rednaxela 18:48, 28 September 2009 (UTC)
Thanks, it's good to be back =). I'm amazed to see that Logic has been pushed down to fourteenth (with a few duplicate entries out there). I might look into melee later, but I'm pretty busy right now... -- Kev
Contents
| Thread title | Replies | Last modified |
|---|---|---|
| BeepBoop seems to be the new king | 14 | 06:17, 17 January 2023 |
| Welcome back! | 4 | 07:18, 21 May 2021 |
Congratulations! BeepBoop is at the very top.
Would you mind hinting at the new top-of-the-line research direction?
Congratulations (again) from me too ;) BeepBoop since 1.2 has had very surprising results (nearly 95!!!). And yet nothing worked when I tried to use gradient descent to train models. Would you mind sharing a little bit more about this part? E.g. initialization, learning rate, how to prevent getting a zero or negative exponent in the x^a formula…
I’ve been meaning to release the code for the training, but it’s currently a huge mess and I’m pretty busy! In the meantime, here are some details that might help:
- I initialized the powers to 1, biases to 0, and multipliers to a simple hand-made KNN formula.
- I constrained the powers to be positive, so I guess the formula should really be written as w(x+b)^abs(a).
- I used Adam with a learning rate of 1e-3 for optimization.
- Changing the KNN formula of course changes the nearest neighbors, so I alternated between training for a couple thousand steps and rebuilding the tree and making new examples.
- For simplicity/efficiency, I used binning to build a histogram over GFs for an observation. Simply normalizing the histogram so it sums to 1 to get an output distribution doesn’t work that well (for one thing, it can produce very low probabilities if the kernel width is small). Instead, I used the output distribution softmax(t * log(histogram + abs(b))) where t and b are learned parameters initialized to 1 and 1e-4.
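The two formulas above can be sketched in a few lines of numpy. This is my own toy reconstruction from the description (function and variable names are mine, not BeepBoop's actual training code):

```python
import numpy as np

# Per-attribute KNN distance term described above: weight w, bias b, power a,
# with the power constrained positive via abs(a), i.e. w * (x + b) ** abs(a).
def attribute_distance(x, w, b, a):
    return w * (x + b) ** np.abs(a)

# Output distribution over guess-factor bins: instead of normalizing the
# histogram directly, use softmax(t * log(histogram + abs(b))), with learned
# temperature t and smoothing b initialized to 1 and 1e-4 as in the post.
def gf_distribution(histogram, t=1.0, b=1e-4):
    logits = t * np.log(histogram + np.abs(b))
    logits -= logits.max()          # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

hist = np.array([0.0, 2.0, 5.0, 1.0])   # toy GF histogram
p = gf_distribution(hist)
# the abs(b) smoothing keeps empty bins at a small but nonzero probability
```

Note that with t = 1 this reduces to simple normalization of the smoothed histogram; the learned temperature is what lets the model sharpen or flatten the distribution.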
Thanks! My guess for the next innovation that could improve bots is active bullet shadowing. Instead of always shooting at the angle your aiming model gives as most likely to hit, it is probably better to sometimes shoot at an angle that is less likely to hit if it creates helpful bullet shadows for you. This idea would especially help against strong surfers whose movements have really flat profiles (so there isn’t much benefit from aiming precisely). I never got around to implementing it, so it remains to be seen if it actually is useful!
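The active-shadowing idea above could be sketched roughly like this (a hypothetical toy example, all names and numbers are mine): score each candidate firing angle by its hit probability plus a bonus for the bullet shadow it would create, rather than always firing at the single most likely angle.

```python
# Toy sketch of "active bullet shadowing": trade a little hit probability
# for a firing angle whose bullet would carve out a useful shadow.
def pick_firing_angle(angles, hit_prob, shadow_value, shadow_weight=0.5):
    # hit_prob / shadow_value: dicts mapping angle -> score in [0, 1]
    return max(angles, key=lambda a: hit_prob[a] + shadow_weight * shadow_value[a])

# Against a strong surfer with a flat profile, hit probabilities are nearly
# equal, so even a modest shadow bonus flips the choice.
angles = [-0.5, 0.0, 0.5]
hit = {-0.5: 0.12, 0.0: 0.15, 0.5: 0.10}
shadow = {-0.5: 0.30, 0.0: 0.0, 0.5: 0.0}
best = pick_firing_angle(angles, hit, shadow)
# best is -0.5: slightly worse hit chance, but it buys a helpful shadow
```

The real difficulty is of course estimating the shadow value, which depends on predicting both bots' future positions, but the selection step itself is this simple.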
Thanks for the insights and ideas. Bullet shielding tempted me a while ago. I thought that if one can intercept a bullet wave close to its launch point, the bullet shadow will be big enough to slide into. But that requires good knowledge of when a bullet will be fired. I guess it can be done similarly to how it's done in DrussGT, which has a tree to predict the opponent's bullets, segmented on energy and distance. (At least I remember reading on the wiki about one way to predict an enemy wave this way.) But my attempts to do it were not very successful.
By the way, could you repackage your bot with an older Java version? I am running the rumble, but it fails on your bot, complaining:
Can't load 'kc.mega.BeepBoop 1.21' because it is an invalid robot or team.
I think the current agreement is that Java JDK 11 or lower is accepted. If you look at the rumble stats, you will see that your bot has fewer battles than many others.
I think RoboRumble should be inclusive, which means the client should be run with the latest LTS version of Java to allow more Java versions to participate. LTS versions are also meant to be more stable, which helps with more stable results.
I also updated the guide to suggest Java 17, which is the latest LTS version for now, instead of Java 11. Would you mind upgrading the Java version of your client?
Sure. I am upgrading my clients to Java 17. Seems to be OK, except for warnings about deprecated calls:
WARNING: System::setSecurityManager has been called by net.sf.robocode.host.security.RobocodeSecurityManager (file:/home/evmik/misc/robocode-1.9.4.2/libs/robocode.host-1.9.4.2.jar)
WARNING: Please consider reporting this to the maintainers of net.sf.robocode.host.security.RobocodeSecurityManager
WARNING: System::setSecurityManager will be removed in a future release
I think it is addressed in newer Robocode versions, but the rumble still accepts only 1.9.4.2.
It was never addressed. Also, there's currently no solution for when Java removes SecurityManager, other than sticking to Java 17 (or newer LTS versions that still ship SecurityManager). Tank Royale could be the long-term plan, but only after some cross-platform sandbox solution has been implemented.
Btw, BeepBoop seems to be losing APS due to inconsistency in RoboRumble client (e.g. skipped turns).
BeepBoop runs fine on my computer, with the same result as the (previous) RoboRumble and without skipped turns. Could you share some information about your environment, e.g. how many clients run in parallel, and whether the machine is dedicated (not running any other tasks)? This may heavily affect the reproducibility of RoboRumble results.
I presume you are asking me. Well, I have several old computers pushing 10+ years, which I have used to run many RoboRumble battles for many years. All of them are doing other useful cron jobs, so performance is not guaranteed. Also, since modern CPUs throttle whenever they feel like it, that might add more jitter.
An interesting observation: I made all rumble clients by copying the robocode folder, so they all have an identical `robocode.properties` file with the following content:
```
#Robocode Properties
#Tue May 06 09:49:19 EDT 2014
robocode.cpu.constant=7434452
robocode.version.lastrun=1.9.2.0
```
The striking part is that the cpu constant is the same on all computers. Also note that the version is misreported.
So this might be the problem, since some machines are slower than my master computer. But I doubt they are a factor of 2 slower.
I used to run RoboRumble on very old computers as well, but I think as long as the cpu constant is actually computed on the corresponding hardware, the results can be trusted if no other tasks are running.
If the computer is under heavy load, I generally multiply the cpu constant by 10 to ensure no jitter.
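As a hypothetical example (taking the constant from the file quoted above), the edited line in `robocode.properties` on a loaded machine would look like:

```properties
# original value 7434452, multiplied by 10 to absorb load-induced jitter
robocode.cpu.constant=74344520
```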
Misreporting `robocode.version.lastrun` should be fine; the property is only there for Robocode to decide whether to regenerate robots.database. It should have been overwritten with the real version by Robocode to prevent regenerating the database each time, but anyway this shouldn't have any impact on the results, only a slower initialization.
Anyway, we need to wait for Kev to upload an identical new version in order to check whether the above countermeasures actually work... It would be great if we could get the same results on 10+ year old hardware (with moderate load), greatly reducing the cost of running RoboRumble.
Happy to upload a new version! Do you know if I can just change the version number in RoboRumble/Participants or do I have to upload a new jar? I'm away from the computer I normally develop BeepBoop on for a week.
It has been 9 years(!) since I first touched Robocode, and I can't remember when my last attempt to create a competitive mega bot was!
Anyway, looking forward to a new challenger in all categories of the rumble!
Thank you! I guess the robocode scene is quieter than it used to be, but it's nice to see many new strong bots in the rumble since I last checked!
Welcome back from me too! You did a hell of an update to WaveShark, going from #9 to #4 in the microrumble.
It is indeed quite quiet, the time of more than 5 people updating a bot weekly really is over. Nevertheless, any involvement, certainly from someone who has proven to be one of the best, could spark a burst of activity again.
Yeah, I'm happy to get WaveShark above Thorn! And congrats on the improvement to GresSuffard!
Welcome back! With all this activity, I'll eventually have to upgrade my own bots as well :)