User talk:D414
Contents
Thread title | Replies | Last modified
---|---|---
My predictions for 2024 | 5 | 17:07, 20 May 2024
What's the highest ranking non-learning bot in the rumble? | 7 | 16:34, 26 January 2024
Having stumbled across the Chronicle pages, I thought it could be fun to make some predictions for what might occur this year.
- HawkOnFire will be outperformed by a nanobot for the first time (Prediction made privately to myself with the initial release of Quantum)
- Shadow will be outperformed (in melee) by a minibot for the first time
- Xor will still be waiting for somebody to dethrone ScalarR in the general melee rumble
- Sheldor will take the crown in either micro or mini 1v1 (possibly both)
- Everybody on the wiki will have had at least one birthday
- Congrats. Curious about the new targeting.
- Could happen because of refinements in Minimum Risk Movement since then. Mirage is only one point of APS away.
- Sure, I haven't studied it closely, but it has great survival. So great, in fact, that it might be trading off APS in some situations.
- Possible. String-based pattern matching in Java is just so efficient that I don't think Epeeist will be a top APS contender (PWIN is a different story). But I have been making some progress in expanding WaveShark's gun and understanding Multiple Choice.
- With all the natural language tech nowadays it's hard to say.
I have faith in you; it'd be great to see a few more crowns changing hands (especially to somebody other than Kev! :p). If you find some changes that could get Epeeist to the top but don't have the space, I'd be happy to see if I can help squeeze it in.
Mirage looks like it could potentially be squeezed down to 12xx bytes (or further) without loss of performance, so that prediction feels like a question of when, rather than if.
The energy management in Quantum v0.2.4 was a bit of a flop, but I have v0.2.3 down to 230 bytes so far, which gives me a few easy options:
- Add a check so that it doesn't fire if it would disable itself (Testing suggests that's worth around 0.2 APS)
- Add a check against getGunTurnRemaining (Testing suggests that's worth around 0.1 APS; both checks are sketched below)
I think I'll be able to fit both of those in for v0.3.0.
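For anyone curious, both checks are tiny in uncompressed Java. A minimal sketch using the standard Robocode AdvancedRobot API (the fire power constant and the gun-turn tolerance are placeholders of mine, not Quantum's actual values):

```java
import robocode.AdvancedRobot;
import robocode.ScannedRobotEvent;

public class GunChecksSketch extends AdvancedRobot {
    static final double FIRE_POWER = 1.95; // placeholder power

    public void onScannedRobot(ScannedRobotEvent e) {
        // ... aim the gun here ...

        // Check 1: never fire a shot that would disable us; firing costs
        // FIRE_POWER energy, so only shoot while there's energy to spare.
        // Check 2: hold fire while the gun is still swinging toward the
        // target, since those shots go out at a stale angle. The 1-degree
        // tolerance is a tuning choice, not a fixed rule.
        if (getEnergy() > FIRE_POWER
                && Math.abs(getGunTurnRemaining()) < 1) {
            setFire(FIRE_POWER);
        }
    }
}
```

In a real nano both conditions would of course be golfed down hard, but the logic stays the same.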
The interesting gun changes will require me to build a melee version of WaveSim if I'm going to have any hope of tuning them properly. The first thing I'd like to try is replacing the Math.pow call with a table, since that would allow me to tune the lead in the gun precisely and it also doesn't require any extra space.
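To illustrate what swapping Math.pow for a table might look like (the formula being replaced and every constant below are invented for the sketch, not taken from Quantum's gun):

```java
// One lead factor per 100 units of distance; each cell can be tuned
// independently, which a smooth Math.pow curve can't offer.
static final double[] LEAD_FACTOR = {
    1.00, 0.96, 0.91, 0.85, 0.80, 0.76, 0.72, 0.69, 0.66, 0.64
};

static double leadFactor(double distance) {
    // replaces something like Math.pow(0.95, distance / 100)
    return LEAD_FACTOR[Math.min(9, (int) (distance / 100))];
}
```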
Alternatively I could forgo the two changes above and add a table-based gun that uses lateral velocity, lateral velocity last tick, distance, bullet power and getOthers as attributes. My gut says that gun could rival (or possibly even outperform) circular targeting if tuned well, but I'm not going to be building a melee version of WaveSim anytime soon, so it's entirely hypothetical right now!
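Purely for illustration, here's a rough sketch of how such a gun might discretise those five attributes and look up a pre-tuned firing offset. The bucket counts, boundaries and indexing are all my own assumptions, and the OFFSET values would have to be filled in offline by something like a melee WaveSim:

```java
static final int LAT_BINS = 5, LAST_LAT_BINS = 5, DIST_BINS = 4,
                 POWER_BINS = 3, OTHERS_BINS = 3;

// Pre-tuned offsets (as fractions of the maximum escape angle),
// indexed by the five discretised attributes.
static final double[][][][][] OFFSET =
    new double[LAT_BINS][LAST_LAT_BINS][DIST_BINS][POWER_BINS][OTHERS_BINS];

static double aimOffset(double latVel, double lastLatVel, double distance,
                        double bulletPower, int others) {
    int a = bin(latVel, 8, LAT_BINS);       // lateral velocity is in [-8, 8]
    int b = bin(lastLatVel, 8, LAST_LAT_BINS);
    int c = Math.min(DIST_BINS - 1, (int) (distance / 300));
    int d = Math.min(POWER_BINS - 1, (int) bulletPower);
    int e = Math.min(OTHERS_BINS - 1, others / 3);
    // standard Robocode maths: bullet speed = 20 - 3 * power
    double maxEscape = Math.asin(8.0 / (20 - 3 * bulletPower));
    return OFFSET[a][b][c][d][e] * maxEscape;
}

static int bin(double v, double max, int bins) {
    // clamp and map [-max, max] onto [0, bins - 1]
    return (int) Math.min(bins - 1, (v + max) / (2 * max) * bins);
}
```

The returned offset would then be added to the direct bearing, with its sign flipped to match the target's lateral direction.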
After all that I'd like to try table-based wall forces, which, if I recall correctly, would only cost me a byte or two.
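A hedged sketch of what table-based wall forces could look like, replacing a computed repulsion such as strength / d^2 with a tunable array (all values below are placeholders):

```java
// Pre-tuned repulsion per 50 units of distance to a wall;
// beyond 250 units the wall no longer pushes.
static final double[] WALL_FORCE = {4.0, 2.0, 1.0, 0.4, 0.1, 0.0};

static double wallForce(double distanceToWall) {
    return WALL_FORCE[Math.min(5, (int) (distanceToWall / 50))];
}
```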
I agree Mirage could be shrunk a lot, but I still think getting ahead of Shadow is a big challenge -- improving APS at that level is really difficult! For one thing, Mirage is way worse at 1v1s than Shadow, which means whenever it's paired with Shadow (or another surfer) it gets lots of 2nds and very few 1sts.
Does anybody know which (non-learning) bot holds the highest ranking in the 1v1 and melee rumbles?
What is your definition of non-learning? Do you mean perceptual?
IMO as long as you have state, you are in fact “learning”.
I think about learning more as: results in the past can influence future decisions. E.g. linear targeting only knows about the current state (radar info), while circular targeting knows about the current state and the previous state. Neither learns, as similar states give similar decisions, without any correlation to past results.
Well, we can loosen the limit of perceptual to allow information from the k most recent turns, i.e. k-perceptual. Under this definition, linear targeting is 1-perceptual, circular targeting is 2-perceptual, and average velocity targeting with window size k is k-perceptual.
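A small self-contained sketch of those three levels, using Robocode's convention of headings measured clockwise from north (the class and method names are invented for illustration):

```java
import java.awt.geom.Point2D;

class Scan { double x, y, heading, velocity; } // one radar snapshot

class Predictors {
    // 1-perceptual: linear targeting uses only the current scan
    static Point2D.Double linear(Scan s, double ticks) {
        return new Point2D.Double(
            s.x + Math.sin(s.heading) * s.velocity * ticks,
            s.y + Math.cos(s.heading) * s.velocity * ticks);
    }

    // 2-perceptual: circular targeting also needs the previous scan
    // (assumed one tick earlier) to estimate the turn rate
    static Point2D.Double circular(Scan prev, Scan cur, double ticks) {
        double turnRate = cur.heading - prev.heading; // per tick
        double x = cur.x, y = cur.y, h = cur.heading;
        for (int i = 0; i < ticks; i++) {   // step forward tick by tick
            h += turnRate;
            x += Math.sin(h) * cur.velocity;
            y += Math.cos(h) * cur.velocity;
        }
        return new Point2D.Double(x, y);
    }

    // k-perceptual: average-velocity targeting is just linear targeting
    // fed with the mean velocity of the last k scans
    static Point2D.Double averageVelocity(Scan[] lastK, double ticks) {
        double sum = 0;
        for (Scan s : lastK) sum += s.velocity;
        Scan cur = lastK[lastK.length - 1];
        Scan avg = new Scan();
        avg.x = cur.x; avg.y = cur.y; avg.heading = cur.heading;
        avg.velocity = sum / lastK.length;
        return linear(avg, ticks);
    }
}
```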
As long as k is large enough, we can still make effective learning methods. So there's still not an absolute difference between learning and non-learning. Even methods as simple as averaging velocity are still “learning”.
This is pretty much what I had in mind, particularly "similar states give similar decisions". Another distinction I find helpful is the idea that past results should not influence decisions in the future, but past states can. A gun that gathers statistics on hit/miss and adjusts its aim is learning, but average velocity targeting is not.
If average velocity (with window size k) is non-learning, how about play-it-forward using only the k most recent scans? Neither changes behavior based on results, only on recent scans; plus, play-it-forward is merely a more precise version of “averaging”.
IMO both are learning from k scans; the only difference is that the latter is more precise and uses the data more effectively.
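For contrast with the averaging above, a sketch of play-it-forward over only the k most recent scans: rather than collapsing them into one mean velocity, it replays the recorded movement deltas verbatim from the current position. Same inputs give the same output, and no results feed back (looping the short history is my own simplification):

```java
import java.awt.geom.Point2D;

class PlayItForward {
    // deltas[i] = (dx, dy) the enemy moved between scan i and scan i + 1,
    // for the k most recent scans only
    static Point2D.Double predict(Point2D.Double current,
                                  Point2D.Double[] deltas, int ticks) {
        double x = current.x, y = current.y;
        for (int i = 0; i < ticks; i++) {
            Point2D.Double d = deltas[i % deltas.length]; // loop the history
            x += d.x;
            y += d.y;
        }
        return new Point2D.Double(x, y);
    }
}
```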
I definitely agree that they're both learning, at least in some sense. The main difference is that the accuracy of a linear targeting system would have diminishing returns as k is increased, whereas a PIF system would likely benefit from increased amounts of data.
It's certainly a difficult question to formalise. I suppose that plotting prediction accuracy vs. k would give some idea of how much learning is going on.
I think the best in the 'perceptual' / 'stateless' category is RetroGirl.