Talk:Watermelon

Saving Data

My suggestion with "Saving Data" is, don't. I've never bothered with it because the gains made are rather small, AND perhaps most importantly, it makes the effectiveness of your bot vary with stupid factors like how many rumble clients are running, since the data isn't shared between clients. In fact, I'd personally vote to ban saving data between battles in the rumble (at least until it's possible for clients to share data to make it actually fair). Really... it's just not worth the hassle... --Rednaxela 23:56, 11 June 2009 (UTC)

Thanks for the input! I'm glad to hear this reinforcement of my suspicions. Barring some convincing evidence to the contrary, my mind's pretty well made up. -- Synapse 03:52, 12 June 2009 (UTC)
My next release should make it up even further :) --Miked0801 04:07, 12 June 2009 (UTC)

Surfing Multiple Waves

My current strategy for multi-wave surfing is to proportionally add the danger from the next wave, varying by how soon the next wave is arriving and by its relative power (roughly sketched below). What I've observed is that while this does help when the second wave is close to the current one, it causes the bot to behave... tentatively. It will dodge the hot spot on the current wave by just enough to clear the worst of the factors, then hover in place, having found a minimum point where it would be reasonably safe on both waves. The thing is, it could do better by travelling further from the safe point on the current wave, then swinging back to a clear point on the next wave. The current behavior also sometimes prevents the bot from seeing other good solutions that involve movement after the current wave is past.

Is there a "cheap" solution that doesn't involve branching my prediction at each tick in the future? -- Synapse 22:47, 14 June 2009 (UTC)
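
A minimal sketch of the proportional weighting described in the post above, assuming each wave's danger has already been evaluated at the position being considered. The method and parameter names are hypothetical illustrations, not Watermelon's actual code:

 // Hypothetical sketch: scale the second wave's danger by how soon it arrives
 // relative to the first wave and by its relative bullet power, then add it
 // to the first wave's danger. Names are illustrative only.
 public double combinedDanger(double firstWaveDanger, double secondWaveDanger,
                              long ticksUntilFirstHit, long ticksUntilSecondHit,
                              double firstPower, double secondPower) {
     // The sooner the second wave arrives after the first, the larger its weight.
     double timeWeight = (double) ticksUntilFirstHit / (double) ticksUntilSecondHit;
     // A more powerful second wave is weighted more heavily.
     double powerWeight = secondPower / firstPower;
     return firstWaveDanger + timeWeight * powerWeight * secondWaveDanger;
 }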

Well, one idea I've had but never tried was making a list of 5 to 15 "candidate" spots on the first wave and, for each of those, finding the reachable GF range on the second wave, then adding the lowest danger within that range to the danger of that candidate spot. --Rednaxela 23:03, 14 June 2009 (UTC)

Rednaxela, that isn't exactly cheap, by the way, but I think the way you mentioned is the cheapest way there is. Just weighting it by inverse distance squared should make it behave better. I've actually faced that problem too: in my BlackHole, when forwardDanger and reverseDanger are equal it chooses backward, so after the first hit the first wave has clockwise danger while the later one has counter-clockwise danger, which leaves my bot sitting at the HOT spot =) » Nat | Talk » 12:50, 15 June 2009 (UTC)

Well, the current version of RougeDC is reasonably fast, I think, and when there are 3 waves out it evaluates 9 paths after branching. What I propose with "candidate" spots would only require 2 branches for each spot, so with 5 "candidate" spots from which to check the second wave it would be about the same speed as the current RougeDC, which is not really a slow bot compared to some. --Rednaxela 13:05, 15 June 2009 (UTC)
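
A rough sketch of the candidate-spot idea described above, assuming hypothetical CandidateSpot and Wave types plus dangerAt() and reachableGfRange() helpers; it is only meant to show the shape of the evaluation, not either bot's actual code:

 // For each candidate spot on the first wave, charge it its own danger plus
 // the lowest danger it could still reach on the second wave, then keep the best.
 public CandidateSpot pickBestSpot(java.util.List<CandidateSpot> candidates,
                                   Wave firstWave, Wave secondWave) {
     CandidateSpot best = null;
     double bestDanger = Double.POSITIVE_INFINITY;
     for (CandidateSpot spot : candidates) {
         double danger = dangerAt(firstWave, spot.guessFactor);
         // GF range on the second wave still reachable from this spot: {minGf, maxGf}
         double[] range = reachableGfRange(secondWave, spot);
         double lowest = Double.POSITIVE_INFINITY;
         for (double gf = range[0]; gf <= range[1]; gf += 0.05) {
             lowest = Math.min(lowest, dangerAt(secondWave, gf));
         }
         danger += lowest;
         if (danger < bestDanger) {
             bestDanger = danger;
             best = spot;
         }
     }
     return best;
 }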

I like the suggestion about using inverse distance squared; I'll try that. I checked DrussGT's old wiki page and he mentions something very useful - he rolls with a smaller depth at higher segmentation, so that the segments that get data more often are also rolled more accurately. Maybe a good way to do this would be a time-based rolling average, where each segment knows how long it has been since it was last updated, and data is weighted less heavily depending on how old it is. -- Synapse 22:32, 15 June 2009 (UTC)
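
One way such a time-based rolling average might look - a minimal per-segment sketch, where the decay rate is an arbitrary illustration value rather than anything recommended in this thread:

 // Hypothetical per-segment rolling average whose stored data fades according
 // to how long the segment has sat without an update.
 public class TimedRollingAverage {
     private double value = 0;
     private double weight = 0;
     private long lastUpdate = 0;

     public void update(double sample, long currentTime) {
         // Decay the existing weight by how long this segment has been idle.
         long age = currentTime - lastUpdate;
         weight *= Math.pow(0.98, age);   // assumed decay rate per tick
         // Standard weighted rolling average, with the new sample at weight 1.
         value = (value * weight + sample) / (weight + 1);
         weight += 1;
         lastUpdate = currentTime;
     }

     public double get() {
         return value;
     }
 }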

Flatteners

I made a flattener. How do I know when to turn it on? -- Synapse 21:20, 18 June 2009 (UTC)

Against any gun where it will gain you points. =) In general, this means a strong or fast-adapting gun. The best way I know of (yet!) is to track enemy hit percentage, normalize it based on raw probability to hit (i.e., adjusting for distance, bullet power, etc.), and activate the flattener above a certain threshold. You'll see big gains against bots like CassiusClay and Ascendant, but it's detrimental against lesser guns, so tuning your decision to turn it on is very touchy. --Voidious 22:40, 18 June 2009 (UTC)

And the ideal threshold is basically the normalized probability that random targeting would have against you, plus a safety margin. Choosing the appropriate safety margin is the key. Well... and also, choosing how much data you need before you even consider the flattener is important. Also, in some cases it may be helpful to make the normalized hit rate a rolling average, since the value early in the match may not mean so much. I don't have as much practical experience with flattener-enabling as Voidious does, but those are some thoughts on the details. --Rednaxela 23:54, 18 June 2009 (UTC)
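
Putting the two posts above together, a flattener switch might look roughly like the sketch below. The random-targeting baseline is approximated as bot width divided by the length of the maximum escape arc; the minimum-shots and safety-margin constants are made-up illustration values, not recommendations from this thread:

 // Hypothetical flattener switch: enable once the enemy's hit rate clearly
 // exceeds what random targeting would manage at the same distance and power.
 public boolean flattenerEnabled(int enemyHits, int enemyShots,
                                 double averageDistance, double averageBulletPower) {
     final int MIN_SHOTS = 50;          // assumed: require some data first
     final double SAFETY_MARGIN = 1.3;  // assumed multiplier over the baseline
     if (enemyShots < MIN_SHOTS) {
         return false;
     }
     double hitRate = (double) enemyHits / enemyShots;
     // Rough random-targeting baseline: bot width over the max escape arc.
     double bulletSpeed = 20 - 3 * averageBulletPower;
     double maxEscapeAngle = Math.asin(8.0 / bulletSpeed);   // max robot speed is 8
     double escapeArc = 2 * maxEscapeAngle * averageDistance;
     double baseline = 36.0 / escapeArc;                     // bot is 36 units wide
     return hitRate > baseline * SAFETY_MARGIN;
 }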

I know you meant normalized but I love the idea of a mathematical procedure called Norma-izing. I don't know what it would do but you know it'd be interesting. Thanks for the useful tips :) -- Synapse 02:35, 19 June 2009 (UTC)