User talk:D414

From Robowiki

Contents

Thread title | Replies | Last modified
My predictions for 2024 | 5 | 16:07, 20 May 2024
What's the highest ranking non-learning bot in the rumble? | 7 | 15:34, 26 January 2024

My predictions for 2024

Having stumbled across the Chronicle pages I thought it could be fun to make some predictions for what might occur this year.

  • HawkOnFire will be outperformed by a nanobot for the first time (Prediction made privately to myself with the initial release of Quantum)
  • Shadow will be outperformed (in melee) by a minibot for the first time
  • Xor will still be waiting for somebody to dethrone ScalarR in the general melee rumble
  • Sheldor will take the crown in either micro or mini 1v1 (possibly both)
  • Everybody on the wiki will have had at least one birthday
David414 (talk) 15:26, 17 May 2024

I’m pretty sure the last prediction will be true.

Xor (talk) 02:02, 18 May 2024
 
  • Congrats. Curious about the new targeting.
  • Could happen because of refinements in Minimum Risk Movement since then. Mirage is only one point of APS away.
  • Sure, I haven't studied it closely, but it has great survival. So great in fact that it might be trading off APS in some situations.
  • Possible. String-based pattern matching in Java is just so efficient that I don't think Epeeist will be a top APS contender (PWIN is a different story). But I have been making some progress in expanding WaveShark's gun and understanding Multiple Choice.
  • With all the natural language tech nowadays it's hard to say.
Sheldor2 (talk) 13:41, 19 May 2024

I have faith in you, it'd be great to see a few more crowns changing hands (especially to somebody other than Kev! :p). If you find some changes that could get Epeeist to the top but don't have the space I'd be happy to see if I can help squeeze it in.

Mirage looks like it could potentially be squeezed down to 12xx bytes (or further) without loss of performance, so that prediction feels like a question of when, rather than if.

The energy management in Quantum v0.2.4 was a bit of a flop, but I have v0.2.3 down to 230 bytes so far, which gives me a few easy options:

  • Add a check so that it doesn't fire if it would disable itself (Testing suggests that's worth around 0.2 APS)
  • Add a check against getGunTurnRemaining (Testing suggests that's worth around 0.1 APS)

I think I'll be able to fit both of those in for v0.3.0.
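
Sketched as plain Java, the two checks could look like this (the helper name and the 1-degree alignment tolerance are my assumptions, not Quantum's actual code; in a real bot, energy would come from getEnergy() and the gun angle from getGunTurnRemaining()):

```java
public class FireGuard {
    // Fire only when doing so won't disable the bot (firing costs `power`
    // energy, and a bot at 0 energy is disabled) and the gun has (nearly)
    // finished turning toward the target.
    static boolean shouldFire(double energy, double power, double gunTurnRemaining) {
        boolean wouldDisable = energy <= power;
        boolean gunAligned = Math.abs(gunTurnRemaining) < 1.0; // degrees; assumed tolerance
        return !wouldDisable && gunAligned;
    }
}
```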

The interesting gun changes will require me to build a melee version of WaveSim if I'm going to have any hope of tuning them properly. The first thing I'd like to try is replacing the Math.pow call with a table, since that would allow me to tune the lead in the gun precisely and it also doesn't require any extra space.
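
A minimal sketch of that idea, with made-up table values: instead of computing the lead via Math.pow at runtime, index a precomputed array by distance bucket, so each entry can be tuned independently.

```java
public class LeadTable {
    // Hypothetical lead multipliers, one per 100px distance bucket; each
    // entry can be tuned on its own, unlike a single Math.pow(distance, k)
    // curve where one exponent controls the whole shape.
    static final double[] LEAD = {0.0, 0.3, 0.5, 0.65, 0.8, 0.9, 1.0, 1.0, 1.0, 1.0};

    static double leadFor(double distance) {
        int i = Math.min((int) (distance / 100), LEAD.length - 1);
        return LEAD[i];
    }
}
```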

Alternatively I could forgo the two changes above and add a table based gun that uses lateral velocity, lateral velocity last tick, distance, bullet power and getOthers as attributes. My gut says that gun could rival (or possibly even outperform) circular targeting if tuned well, but I'm not going to be building a melee version of WaveSim anytime soon so it's entirely hypothetical right now!
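
For illustration only (the gun is hypothetical, so the bin counts and widths here are invented, not tuned values), the segmentation for such a table-based gun might look like:

```java
public class SegmentedGun {
    // Invented bin counts; a real gun would tune these against WaveSim data.
    static final int LAT_BINS = 5, DIST_BINS = 4;

    // One aim value per cell; lateral velocity last tick, bullet power and
    // getOthers() would each add another dimension to this table.
    static final double[] table = new double[LAT_BINS * DIST_BINS];

    static int latBin(double latVel) {     // |lateral velocity| is in [0, 8]
        return Math.min(LAT_BINS - 1, (int) (Math.abs(latVel) / 2));
    }

    static int distBin(double distance) {  // buckets of 300px
        return Math.min(DIST_BINS - 1, (int) (distance / 300));
    }

    static int cell(double latVel, double distance) {
        return latBin(latVel) * DIST_BINS + distBin(distance);
    }
}
```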

After all that I'd like to try table based wall forces, which if I recall correctly would only cost me a byte or two.
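
The same table trick applied to wall forces might look like this (the force values and bucket width are placeholders; in minimum risk movement each entry would push candidate destinations away from the wall):

```java
public class WallForce {
    // Placeholder repulsion strengths per 40px distance-to-wall bucket.
    static final double[] FORCE = {5.0, 3.0, 1.5, 0.5, 0.0};

    static double forceAt(double distToWall) {
        int i = Math.min((int) (distToWall / 40), FORCE.length - 1);
        return FORCE[i];
    }
}
```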

David414 (talk) 15:38, 19 May 2024

I agree Mirage could be shrunk a lot, but I still think getting ahead of Shadow is a big challenge -- improving APS at that level is really difficult! For one thing, Mirage is way worse at 1v1s than Shadow, which means whenever it's paired with Shadow (or another surfer) it gets lots of 2nds and very few 1sts.

--Kev (talk) 21:42, 19 May 2024

It would definitely be a challenge, and Mirage's movement and/or gun might not be the right approach, but I think Mirage is proof that it's possible. If somebody had suggested a mini could beat Shadow before you'd released Mirage, I'd have laughed, but not anymore!

David414 (talk) 16:07, 20 May 2024

What's the highest ranking non-learning bot in the rumble?

Does anybody know which (non-learning) bot holds the highest ranking in the 1v1 and melee rumbles?

D414 (talk) 12:13, 25 January 2024

What is your definition of non-learning? Do you mean perceptual?

IMO as long as you have state, you are in fact “learning”.

Xor (talk) 12:35, 25 January 2024

I think about learning more as: results in the past can influence future decisions. E.g. linear targeting only knows about the current state (radar info), while circular targeting knows about the current state and the previous state. Neither learns, as similar states give similar decisions, without any correlation to past results.

GrubbmGait (talk) 14:06, 25 January 2024

Well, we can loosen the definition of perceptual to allow information from the k most recent turns, i.e. k-perceptual. Under this definition, linear targeting is 1-perceptual, circular targeting is 2-perceptual, and average velocity targeting with window size k is k-perceptual.

As long as k is large enough, we can still build effective learning methods. So there's still no absolute difference between learning and non-learning. Even a method as simple as averaging velocity is still “learning”.
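
To make the k-perceptual idea concrete, here is a sketch of 1-perceptual linear targeting next to k-perceptual average-velocity targeting (circular targeting would additionally need the heading change between the last two scans):

```java
public class Perceptual {
    // 1-perceptual: only the current scan (position and velocity) is used.
    static double[] linearPredict(double x, double y, double vx, double vy, double ticks) {
        return new double[] {x + vx * ticks, y + vy * ticks};
    }

    // k-perceptual: the same extrapolation, but the velocity is averaged over
    // the last k scans. Similar recent scans still give similar predictions.
    static double[] avgPredict(double x, double y, double[] vxs, double[] vys, double ticks) {
        double vx = 0, vy = 0;
        for (double v : vxs) vx += v;
        for (double v : vys) vy += v;
        vx /= vxs.length;
        vy /= vys.length;
        return new double[] {x + vx * ticks, y + vy * ticks};
    }
}
```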

Xor (talk) 15:41, 25 January 2024
 

This is pretty much what I had in mind, particularly "similar states give similar decisions". Another distinction I find helpful is the idea that past results should not influence future decisions, but past states can. A gun that gathers hit/miss statistics and adjusts its aim is learning, but average velocity targeting is not.
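
A toy illustration of that distinction (the names and the 0.1 step size are mine): a result-driven gun consults hit/miss feedback, which average-velocity targeting never does.

```java
public class ResultLearner {
    double aimBias = 0.0; // learned aim correction

    // Past *results* feed back into future decisions: a miss nudges the bias
    // toward where the target actually was. Average-velocity targeting has no
    // equivalent of this method; it reads only past states.
    void recordShot(boolean hit, double missedBy) {
        if (!hit) aimBias += 0.1 * missedBy;
    }
}
```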

D414 (talk) 16:45, 25 January 2024

If average velocity (with window size k) is non-learning, what about play-it-forward using only the k most recent scans? Neither changes behavior based on results, only recent scans; plus, play-it-forward is merely a more precise version of “averaging”.

IMO both are learning from k scans; the only difference is that the latter is more precise and uses the data more effectively.

Xor (talk) 03:00, 26 January 2024

I definitely agree that they're both learning, at least in some sense. The main difference is that the accuracy of a linear targeting system would have diminishing returns as k is increased, whereas a PIF system would likely benefit from increased amounts of data.

It's certainly a difficult question to formalise. I suppose that plotting prediction accuracy vs. k would give some idea of how much learning is going on.
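
For comparison, here is a simplified sketch of the play-it-forward side of the argument (real PIF replays a matched historical movement pattern; this version just replays the k recorded per-tick displacements): unlike averaging, it keeps each scan's information instead of collapsing it all into one velocity.

```java
import java.util.List;

public class PlayItForward {
    // Replay recorded per-tick displacements from the current position.
    static double[] project(double x, double y, List<double[]> deltas) {
        for (double[] d : deltas) {
            x += d[0];
            y += d[1];
        }
        return new double[] {x, y};
    }
}
```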

D414 (talk) 15:34, 26 January 2024
