Archived talk:Neural Targeting 20071112


From old wiki's NeuralTargeting page

How it works (one of the many possible approaches)

First some mathematics:

There is the famous Takens' theorem which says (basically) that given a deterministic series x(1), x(2), ... , x(t) there is an embedding dimension E (vector length), a delay D (time delay) and a function F such that:

 x(t) = F(x(t-1*D), x(t-2*D), ... , x(t-E*D))

By knowing E, D and F you could predict the future values of the series (i.e. future enemy velocity and heading change). But the problem is that you don't know E, D or F, and you are not even sure these series are deterministic!

Now some engineering:

Forget about determinism; just assume the series are deterministic. Pick some arbitrary E and D (e.g. 20 and 2) and adjust them by trial and error. The only part remaining is F. We will use a neural net to approximate it, and will train the net with the enemy bot's movements.

The result is as follows:

  1. We will try to predict the one-step-ahead velocity and heading change for the enemy.
  2. Pick an embedding dimension and a delay.
  3. Create a neural net with embed*2 input nodes, an arbitrary number of hidden nodes and 2 output nodes.
  4. Train the network by feeding the first half of the input nodes with the velocities of previous turns (taking the delay into account) and the second half with the previous heading changes. As outputs, put the velocity and heading change of the current turn.
  5. When you are about to fire, use the recent history to calculate future velocity and heading change by feeding it into the trained network. Note that because you are predicting one-step-ahead values, this is an iterative process that at some point will use the estimates themselves as inputs to the network (a rough sketch of this loop follows below).
  6. Reconstruct the expected path of the enemy using the predicted velocities and heading changes, aim your gun, and fire!

-- Albert
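
As an illustration of step 5's iterative prediction, here is a minimal sketch; the Net interface and predict() method are placeholders, not part of NRJLIB or any other library:

 // A rough, hypothetical sketch of step 5: feed recent velocities and heading
 // changes to the trained net, then feed the estimates back in as inputs.
 interface Net {
     double[] predict(double[] inputs);               // returns {velocity, headingChange}
 }

 class IterativePredictor {
     // recentVel/recentHead hold the E most recent (delayed) observations.
     double[][] predictAhead(Net net, double[] recentVel, double[] recentHead, int steps) {
         int e = recentVel.length;                     // the embed E
         double[] in = new double[2 * e];
         System.arraycopy(recentVel, 0, in, 0, e);     // first half: previous velocities
         System.arraycopy(recentHead, 0, in, e, e);    // second half: previous heading changes
         double[][] path = new double[steps][];
         for (int t = 0; t < steps; t++) {
             double[] out = net.predict(in);           // one-step-ahead estimate
             path[t] = out;
             System.arraycopy(in, 1, in, 0, e - 1);    // slide each half one step and
             in[e - 1] = out[0];                       // append the estimated velocity...
             System.arraycopy(in, e + 1, in, e, e - 1);
             in[2 * e - 1] = out[1];                   // ...and the estimated heading change
         }
         return path;                                  // integrate these to rebuild the enemy path
     }
 }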


Comments

Unfortunately, the assumption that bots' movements are deterministic is just plain wrong. All good bots use a good amount of indeterminate (random) movement, and this is precisely why I have given up on neural networks and FIR filters for enemy movement prediction (the networks always diverge because of the indeterminacy). The only use I see now for neural networks in a bot is as classifiers (see Kohonen networks), especially for putting enemy movement into different categories that can then be studied by a statistical representation of a bot's movement (i.e. virtual bullets or wave systems).

I am currently working on a bot that categorizes virtual bullet buckets by pattern-matching. I'm not sure if anyone has tried this yet, but it seems to show promise so far.

-- nano

Just take a look at BestPSpace; it has a similar approach.

-- Albert

This neural net thing is really fun. Thanks for pointing me to the NRJLIB. Now I am trying to squeeze it into a MicroBot. It can be done, but it still doesn't beat the pattern-matchers. Well, I have some long days ahead :-) -- Tobe


Some tests related to NeuralTargeting

I just uploaded the following image to demonstrate how effective NeuralTargeting can be:

http://students.cs.byu.edu/~kawigi/robots/aiming.gif

Data is from ScruchiPu plotted using RobocodeGLV014. The green dots are the predicted bearing (to fire) and the white lines are the right bearing to hit (i.e. when the green dots are between the lines, you will hit). Graphs are for SpinBot, Moebius and Sandbox. The first two have an almost deterministic behaviour, and ScruchiPu gets almost a 100% hit rate on them. Sandbox has quite a complex movement, with lots of randomness, but the NN still gets some good prediction power (of course, sometimes it fails completely, and other times it only manages to get the general movement but in an imprecise way...) -- Albert

I downloaded the picture onto my webspace at school, which doesn't mind cross-linking (a picture on Geocities won't show up the first time if you link it from another website). -- Kawigi

Is this a graph of leading factor (from -1 to 1 where one is maximum lead) over time? If so, how many rounds did you have to let your gun run to get these results? If this is the first round, I'm switching guns :) -- Miked0801

It shows in green the relative bearing to fire over time (relativeBearing + enemyBearing = GunBearing). It ranges from -0.81 to 0.81, which is the maximum angle a bot can travel through before a bullet hits it. The white lines are the bearing you should fire at to hit it. For SpinBot or Moebius, it needs no more than one round to learn (well, an additional one makes it perfect). Don't be impressed by the graphics. As you can see, NeuralTargeting is really effective with "deterministic" patterns, even complex ones, but it needs to improve when dealing with more random movements... -- Albert
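
As a side note, the ±0.81 figure is simply the maximum escape angle for a full-power bullet; a quick sketch of the arithmetic:

 // Why the graph tops out near ±0.81: for a power-3 bullet, Robocode's
 // bullet-speed rule gives speed = 20 - 3 * power = 11, and the maximum angle
 // a bot moving at full speed (8) can cover before the bullet arrives is
 double maxEscapeAngle = Math.asin(8.0 / 11.0);   // about 0.814 radians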

Could you make a graph like that for Ares as well? It has some quite deterministic patterns, but seems to have quite a few of them. Orca can easily learn its StopAndGo thingy, but its wide oscillation movement is much harder. -- PEZ


Just take a look at http://www.geocities.com/albert_pv/images.html


ScruchiPu beats Ares 6030-2539. A curious effect is that the more the net learns about the oscillating and corner movements, the worse it performs against the stop-and-go movement (it gets almost 100% on it during the first rounds, then degrades). Probably it is some kind of interference between patterns. The good thing is that it seems the network is able to store different patterns at once, and use them as needed. FYI, the network I'm using is a little bit different from ScruchiPu 0.7, because I tried it with the development version. -- Albert

Thanks! I haven't had the patience yet to run that many rounds against any bot. And Orca's movement sucks so I can't really compare wins. But I have seen the same behaviour with almost 100% on the stop-and-go first and then it degrades. Are you saying that your network manages to gain back its performance on the stop-and-go if you just give it some time? -- PEZ

More questions. Does ScruchiPu get that 6030-2539 score against Ares when starting from scratch? Orca is currently running a 6000 round match (don't ask why 6000, my tired head misunderstood your score figures...). Anyway, after 1400 rounds it shows no signs of improving its lousy score, 250 wins only. I'm confident Orca will win on points, but the fact remains that Orca's NN doesn't seem to cut it against Ares. I will come back with whatever conclusions I can draw when the battle is finished. -- PEZ


... well, I'm running it on a laptop computer of some age, so I interrupted the battle after 3368 rounds, of which Orca won only 593. No signs of improvement. I think maybe my simplistic approach to NN might be too simple for any serious bots. But I'll experiment some more. If anyone has any suggestions on what inputs they think would be worth trying, please holler at me. -- PEZ

Yes, the results are for the first 50 matches (no data stored previously). It takes no more than 10 rounds to have good aiming against Ares (for some bots you need more rounds - against DuelistMini, for example, it seems to keep learning for 50-100 rounds). I don't think NN performance will improve after many rounds, unless you create a topology "big" enough to store all the patterns Ares uses. My development version of ScruchiPu is currently using a 120-15-15-2 topology (input-hidden-hidden-output). Note that the number of input nodes will somehow define the width of the movements your bot will be able to recognize, and the number of hidden layers and nodes the number and complexity of the patterns it can recognize (as a drawback, the more layers and nodes, the slower it learns). Another factor that maybe we should consider is the learning rate (you can modify it using the NrPop class). I think the default learning factor is quite big, so it could be interesting to reduce it after the first rounds, to try catching subtler patterns. Quite interesting ... --Albert

Ahhhh, it was only 50 rounds! I thought those figures were numbers of wins! Gah! I'll run 50 rounds and just check the score then. =) Thanks for telling me how the topology will affect the behaviour of the net. I'm trying with more inputs now. -- PEZ

One thing to keep in mind when tweaking the network topology is that the nets easily can get rather big when on file. Even with Albert's zip addition I have managed to create saved files of about 75kb! I now have a smaller net which "only" takes 8kb. It's in the new Orca just uploaded to the RobocodeRepository. -- PEZ

Which version of Moebius is that? Each version moves in different ways (earlier versions moved in a near box or triangle, later ones in a random sine oscillation). -- Miked0801

I can't remember, but it was moving in triangles. -- Albert

Ok then, not the newest, which oscillates at 90 degrees bearing with a bit of random thrown into the period and velocity. I'd be curious to see that graph ;) --Miked0801


Some considerations related to NN performance.

In response to the comment waaay up top there, Albert, don't undersell your creation. :) One of the strong points of neural nets is that they can take fuzzy input and return hard numbers (similarly to statistics), so if patterns aren't entirely deterministic it shouldn't matter; the randomness will weed itself out given some time to learn. However, I've been running ScruchiPu against just about everything in my library, and I've noticed a trend you might be able to reproduce: he peaks in targeting within the first 50 or so rounds, then he actually begins to target less and less effectively. If I had to take a shot in the dark, I'd say he's overfitting, that is, memorizing exact patterns instead of learning general trends.

You can combat that a little with the error term, and more effectively with some other methods; see ftp://ftp.sas.com/pub/neural/FAQ3.html#A_over and http://www.mathworks.com/access/helpdesk/help/toolbox/nnet/backpr15.shtml for something more in-depth on it. -- Kuuran

Hmmm, there seems to be a lot to learn if the net should be effective. I did manage to make one that quickly learnt the movement of Walls. Then it turned out that it learnt the movement equally well if I fed random numbers as input. Very good! The output should be constant... -- Tobe

I saw that performance degraded after some time, but I didn't want to believe it. Thanks for pointing it out. I'll take a look at the articles and see if I can improve the net. I didn't expect it to overfit, since I use every observation only once to train the NN. -- Albert

This happens to Orca as well. Can someone tell me how I could interpret the error value returned by the training methods? It seems it should be possible to use it to know when to stop training the net. -- PEZ

The error is the (squared?) difference between the expected outputs and the predicted values. The error decreases as you train the network, but the decrease slows down at a certain point. You could define a threshold and stop training the network when the decrease is less than this. Note that this method is quite empirical, and you will not be sure if you stopped at the right time. -- Albert
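
A minimal sketch of that stopping rule, assuming a hypothetical net.train() call that returns the squared error of the presented example:

 // Stop training once the per-sample error stops decreasing by more than a
 // threshold. In practice you would smooth the error over several turns first.
 double previousError = Double.MAX_VALUE;
 boolean training = true;
 static final double THRESHOLD = 1e-4;               // empirical; tune per opponent

 void onObservation(double[] inputs, double[] targets) {
     if (!training) return;
     double error = net.train(inputs, targets);      // assumed to return squared error
     if (previousError - error < THRESHOLD) {
         training = false;                            // the decrease has flattened out
     }
     previousError = error;
 }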

Glad to be of potential assistance ;) I don't know for sure if it's overfitting, but that's my guess. I guess the way to find out would be to introduce a method of combatting overfitting and see what happens. I find myself in a position where I can argue both that it is and isn't overfitting, so I'm afraid it's nothing more than a general suggestion.

Umm.. the error threshold stopping method will work to a degree, but as you said, it's rather difficult to make a solid bot from such an empirical method, unless you want to manually tweak it for every opponent ;p Personally the weight decay (see articles) method looks appealing to me, but I also saw an article somewhere outlining a method that determined if the function was both "trained enough" and "general enough" yet, then stopped learning. It might've been Bayesian learning, which does something like that, but I can't recall the details of either well enough to match them up. I'm not sure if learning would re-start upon a switch to an entirely new opponent movement that hadn't been seen before, though, and if it doesn't you have some potential for serious problems :( -- Kuuran
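
For reference, the weight decay mentioned above is just an extra shrinking term in each weight update; a tiny sketch, with learningRate, lambda and the gradient standing in for whatever the net already computes:

 // Weight decay: every update also pulls the weight toward zero, which
 // discourages the net from memorizing exact patterns.
 for (int i = 0; i < weights.length; i++) {
     weights[i] -= learningRate * gradient[i] + lambda * weights[i];
 }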

When I was building a bot using neural nets, one of the main features was that it used hot-swappable nets. If you have one movement type figured out, and the bot switches to a completely new movement type, you shouldn't destroy your old neural net to learn the new movement. You should save the old one, and start over with a fresh neural net (or better, a duplicate of a saved neural net that predicts the new movement the best). You periodically test your neural nets, and keep switching to the one with the lowest average error. If none of the neural nets has a low enough error, you create a new one for the particular type of movement your enemy is using, based on a duplicate of the neural net with the lowest error. Make sense? This would make ScruchiPu much more effective against bots like PrairieWolf that change movements constantly. Although ScruchiPu learns fast enough that it might not be a huge gain, it would help some I think. -- nano
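
A rough sketch of that selection loop; Net, recentAverageError() and copy() are placeholder names, not actual ScruchiPu or Orca code:

 // Keep a pool of nets, use the one with the lowest recent error, and clone
 // the closest match into a fresh slot when nothing fits the current movement.
 Net pickNet(java.util.List<Net> pool, double errorCeiling) {
     Net best = pool.get(0);
     for (Net n : pool) {
         if (n.recentAverageError() < best.recentAverageError()) {
             best = n;
         }
     }
     if (best.recentAverageError() > errorCeiling) {
         Net fresh = best.copy();       // seed the new movement type from the closest match
         pool.add(fresh);
         best = fresh;
     }
     return best;                       // train (and aim with) only the selected net
 }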

A version of Orca (called OrcaM at the moment) tries this. Though I never figured out a way to choose when to start on a new net. I guess the lowest squared error could be the way to do it. You can't afford training the saved net when you suspect it is a different movement, can you? Now, I think Orca proves that I do something very wrong with my attempts on neural targeting altogether, so I guess I shouldn't draw too large conclusions from the failures of OrcaM. =) I'll open source it and let you others have a go at it. Watch this space. -- PEZ

OK. It's on the loose now. Download it from the repository. Source is included. I'll write more about it tonight. -- PEZ

I'm not sure you need different networks for different movements. In theory, the network should be able to identify the correct patterns and predict accordingly. See the graphs above (there is a link to display them) about Ares; the same network is able to predict the 3 patterns. The only problem I see for bots using multi-movement is that the NN can "forget" a type of movement, if it has not been used for a long time and you keep training the network (just because it will adapt to the new movements). But this problem could be overcome by storing some old data and "refreshing" the network by reintroducing it into the training process. -- Albert

Yes, and even with several networks we still have the problem of how to choose the right network to train without destroying a network aimed at another movement style... Anyway, OrcaM seems to be quite effective against some enemies. Maybe it's just that it can quickly disregard any network that has "gone astray"? Try it against GlowBlow for instance. I just posted the source on OrcaM/Code. -- PEZ

Different networks for different movements IS an improvement on a single network, because neural networks have a finite amount of "memory". If a bot uses many different kinds of movements, the neural network will have a tougher and tougher time remembering them all (due to overlap, and the difficulty of having a finite number of weights store a large number of patterns). If all of the weights in a network are used to store a single pattern, the end result will be more accurate than storing many patterns at once. -- nano

But good luck finding the rule for switching nets. I had this running through my mind for a while and haven't come to a solution yet. Unlike VirtualGuns, you can't simply aim a net and see what hits; you have to figure out which net to start teaching without knowing what the end result is supposed to look like, where the end result is what's telling you what net to pick. Basically, since the only really good selection criterion for which net is which pattern the opposing bot is on, and determining that requires having the net already trained, this isn't the winning approach. -- Kuuran (cont'd after following section)


I think NNs are very good at classification and function estimation. The targeting uses nonlinear function approximation. But you could also use another NN for classification of movements and then use the other NN to fire. This could be an interesting approach to classify the movements. SSO

Yup, I think movement classification could be one of the very best uses of NNs. -- PEZ

I'm now very busy but I'd like to contribute. There is an open source NN package http://sourceforge.net/projects/jsomap/. It is a SOM type of NN, which is good for classification. Maybe it is possible to make an implementation by using that package and an open bot which uses pattern matching. The pattern inputs can be the inputs for training. SSO

Please hack away on OrcaM. -- PEZ



The only really viable solution I see in the direction of multiple nets is an augmented VirtualGuns system where each net is tied to a gun, and whichever gun is hitting the most has its associated net activated. That way you can customize each gun on the fly and learn (the advantages of the neural net) with the selection advantages of the VirtualGuns. You could fire from both, teach the net, and when the net beats the gun in hit rate, switch over. Any pattern that your VirtualGuns can't figure out would just go to your direct or pattern guns, and those nets would be the fallback nets for all the weird movements that you can't really categorize. It might actually be a useful system to try, but it's definitely not going onto my todo list. -- Kuuran
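
A bare-bones sketch of the selection half of that idea, where each trained net is wrapped as one more virtual gun and rated by its virtual-bullet hit rate; Gun and hitRate() are placeholders:

 // Treat each net as just another virtual gun and fire whichever is hitting best.
 Gun selectGun(java.util.List<Gun> guns) {
     Gun best = guns.get(0);
     for (Gun g : guns) {
         if (g.hitRate() > best.hitRate()) {
             best = g;
         }
     }
     return best;
 }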

This is in part what OrcaM does. It tries to overcome the problem with look-ahead by saving a back-up of each net before starting a training session. Then if it appears to be the wrong pattern, it restores the old net. The mechanism for deciding whether it was the right network or not is very clumsy at the moment, but I think that someone with a better understanding of the neural network thing than I have could figure out a more theoretically sound solution. -- PEZ

I just found this page and enjoyed reading it. About the problem of identifying the perfect NN for the current opposition, why not just let the NN find it out. Isn't it possible to add the hashed name of the targeted robot to the input of the neural network? -- Dries

Well, it's not easy, but certainly possible. That's not the problem, though, I think. The problem is picking, in the middle of the fight, the best NN for the given situation (like: enemy close to the corner, has just moved leftwards and towards you, and at a rather long distance...). See SSO's comment about using NNs for classification. I think that is the way to go. -- PEZ

I don't like that solution at all. In order to classify the movement they have to be able to learn which movement is which, which brings you back to the problem of setting them up to learn that. In the solution to that setup probably lies a much simpler system for the classification in the first place. -- Kuuran

Well, the classification phase could be carried out in the lab before the bot gets out into real battles. And it could maybe be quite iterative and use some Temporal Difference kind of mechanism to give feedback about which choice was good and which was bad. Anyway, since so many bots are open source you could maybe create a mega-multi-mode bot that sports lots and lots of movement systems. Then maybe you could use the Team interfaces for the classifier and the target. The target could pick movements at random and tell the classifier what movement it uses, and the classifier could use this correct answer to train its net. I'm certainly ready to contribute to the movement-bot as long as one or some of you rocket scientists take care of the classifier business. -- PEZ

The SOM type of NN is different from the BPN type. I found somewhere a simple explanation of the difference: "One of the most interesting aspects of SOMs is that they learn to classify data without supervision. You may already be aware of supervised training techniques such as backpropagation where the training data consists of vector pairs - an input vector and a target vector. With this approach an input vector is presented to the network (typically a multilayer feedforward network) and the output is compared with the target vector. If they differ, the weights of the network are altered slightly to reduce the error in the output. This is repeated many times and with many sets of vector pairs until the network gives the desired output. Training a SOM however, requires no target vector. A SOM learns to classify the training data without any external supervision whatsoever." Then the question remains: based on which criteria should we classify the movements? An interesting criterion could be based on the enemy's distance away from us, using delta speed, delta angle, etc. (similar to the pattern matchers) -- SSO

An example of a self-organizing map (SOM) is the Kohonen network, which I mentioned earlier, and which is specifically designed to segment data into classifications without supervision. This is what I meant when I said that I believed this is the direction that neural networks should move in bots. Props to the first person to implement a Kohonen map to categorize robot movement. -- nano
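
For anyone taking nano up on that, a minimal Kohonen/SOM weight update looks roughly like this; all names are placeholders, not a particular library:

 // 1-D Kohonen map over movement-feature vectors: find the best matching unit,
 // then pull it and its neighbours toward the input. nodes[i] is node i's weights.
 void train(double[][] nodes, double[] input, double learnRate, double radius) {
     int bmu = 0;                                    // best matching unit
     double bestDist = Double.MAX_VALUE;
     for (int i = 0; i < nodes.length; i++) {
         double d = 0;
         for (int j = 0; j < input.length; j++) {
             d += (nodes[i][j] - input[j]) * (nodes[i][j] - input[j]);
         }
         if (d < bestDist) { bestDist = d; bmu = i; }
     }
     for (int i = 0; i < nodes.length; i++) {        // pull neighbours toward the input
         double dist = i - bmu;
         double influence = Math.exp(-dist * dist / (2 * radius * radius));
         for (int j = 0; j < input.length; j++) {
             nodes[i][j] += learnRate * influence * (input[j] - nodes[i][j]);
         }
     }
 }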

Well, unsupervised classification sounds great for the purpose. Maybe we could use input similar to what ScruchiPu uses today? -- PEZ

Version 0.7 uses:

 //Input Vel. Scale factor
 //Input Hed. Scale factor
 //Output Vel. Scale factor
 //Output Hed. Scale factor

Maybe we need more info. The questions are: what parameters can we use to classify the movements?

 direction, distance, heading difference, location, ...

What are the possible movement types?

 oscillation, linear, ...

Any more improvements? SSO

Maybe since the classification is unsupervised and automatic it doesn't matter what possible types of movements there are? Wouldn't the output just be like movement1, movement2 etc., and then we could just train specific targeting neural nets based on the classification? More possible inputs (a sketch of how a couple of these are derived follows below):

  • LateralVelocity
  • AdvancingVelocity
  • Proximity to walls
  • Distance from center
  • Velocity
  • Acceleration
  • ...

-- PEZ
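
For reference, a couple of the inputs above are usually derived from the scan event like this (standard Robocode API calls, angles in radians):

 // e is the ScannedRobotEvent; getHeadingRadians() is the scanning robot's own heading.
 double absBearing = getHeadingRadians() + e.getBearingRadians();
 double lateralVelocity   = e.getVelocity() * Math.sin(e.getHeadingRadians() - absBearing);
 double advancingVelocity = -e.getVelocity() * Math.cos(e.getHeadingRadians() - absBearing);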

Kohonen maps aren't the most reliable classifiers, by far. Play with this a while and you'll see it takes minimal modification for it to misclassify everything, and it's pre-trained. -- Kuuran

Second note: You may also have far more luck with a Hopfield than a Kohonen. Kohonens are for pattern classification like identifying a noisy image of a letter as being the letter. Hopfields are better for associating patterns, and are also unsupervised learning nets. Hopfields are also far simpler. -- Kuuran

A Kohonen network is exactly what is called for to solve this problem. Hopfield networks are for pattern recognition. Kohonen networks are for pattern classification. Hopfield networks aren't meant to classify -- they're meant to reproduce the closest stored pattern to the input. The Kohonen network you linked to does not misclassify based on noisy input. On the contrary, it correctly returns the closest classification for the given input, regardless of noise. If you look at the number of pixel differences between the inputs, you will see that the network is correct. -- nano

I found several instances where it misfired, but that's all academic. Before you go on too much on the topic, try to set one up and see the details unfold. I think you'll see where my misgivings lie. -- Kuuran


Okay, I dunno much about neural networks, but couldn't you, say, have a different network for each of the values you get from the enemy, then just recombine them to fire, sorta like segmentation in a guess factor gun? E.g. a network to learn and monitor the bot's distance, one for the acceleration, one for how it turns and stuff, then one network to pull it together and make the decision how to combine all that neural know-how. To me, as far as what I know about NN, this sounds possible, but as I haven't even read a single one of the articles I wouldn't know. Just putting in my 2 cents. -- Chase-san

Isn't this similar to PM? --Starrynte

Neural networks are effectively function estimators. So... given a set of parameters (distance, velocity, acceleration, etc...), a properly constructed and trained neural network might be able to tell you the next likely robot bearing. Neural networks are mostly another way of modeling probability. They're definitely not like pattern matching. Simple pattern matching is deterministic - you either find a finite match or you don't. Neural networks are 'fuzzier'. Academically, there's no good way to tell why they're working or not working. You just have to experiment. Good luck. --Corbos

All in all, getting NeuralTargeting to work is a long and tedious process, and it takes even longer to get it to work "right", if there is such a thing for it. Currently I see neural networks as just a plaything for people with far too much free time (like me) or people who can really make them work (like wcsv). If you're just starting out, and haven't worked on Neural Networks in some other capacity, I would try some of the other options first. But that's just my opinion and you can do whatever you wish in whatever order you please. -- Chase-san

I have done some more work with Kohonen maps lately. As I feel that I am getting better at understanding these advanced concepts, I can tackle one of the biggest. I have an idea for a Kohonen map that will replace the bins in a traditional GF gun. It will also be a little like DC as well, as the weights will be recorded as numbers 0 to 1. I wanna see if this will work. However, I have a stumbling point: though it keeps the data on movement and relates it to GF, I am wondering how to effectively store the GF in these, as 'pulling' the GF closer to the BMU may very well turn out to be the very worst possible thing to do. So I will try it, but any alternatives to this very sloppy move will be appreciated. --Chase-san

  • I would love to offer a suggestion, but I'm afraid I have no idea about neural networks. As far as I know "bmu" is "Boston, Mass. University". -- Simonton
  • I don't completely follow what you say in your comment, but I have studied Neural Networks in a class before. I think the way that Engineer does it makes a lot of sense - you have a network with inputs being the attributes and outputs equal in number to the GuessFactor bins. To train it, you feed it the attributes and the GuessFactor that hit (or hit you, for movement). I didn't learn about BMUs, but after looking it up, I don't think the GuessFactor would relate to the BMU at all - the BMU has to do with the neuron's weights' similarity to the current training input, which is the attributes, not the GF. I may well be misunderstanding that part of your comment, though. And who says a neural net is the "biggest" Machine Learning system? I found Support Vector Machines to be much more complex. =) -- Voidious
    • Hem, I made a typo in my first post ("on" was supposed to be "one"): one of the biggest. But that seems to put me back near square one. Each node in the map would have weights for, say, velocity, distance and so on, and then an output weight, the GF. The normal weights would start out randomized, or if I feel like quicker learning, some sorta even distribution, and the GF obviously 0 (to aim center). Each of the normal weights would be pushed and pulled when a new input vector was added (they would be pulled towards the best matching node), and I was wondering how to deal with the output vector and whether the same should be done with it as well, i.e. pull the GFs to be more similar the nearer the node is to the new BMU. But this is the sticking point; this I figure WILL NOT work, so I was asking for a better solution (stated in a way I can understand at this point :P). If I do it the Engineer way, I need to know how to adjust the output vector while training, so when I do a lookup I will get the best GF for that situation. --Chase-san
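
A small sketch of the "outputs equal the GuessFactor bins" setup Voidious describes; net.train() is a hypothetical call and the bin count is arbitrary:

 // The training target is all zeros except the bin of the GuessFactor that hit.
 static final int BINS = 47;

 double[] targetFor(double guessFactor) {            // guessFactor in [-1, 1]
     double[] target = new double[BINS];
     int bin = (int) Math.round((guessFactor + 1) / 2 * (BINS - 1));
     target[bin] = 1.0;                               // optionally smear into neighbouring bins
     return target;
 }
 // Training: net.train(attributes, targetFor(observedGF));
 // Aiming: feed the current attributes forward and fire at the highest-output bin.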

From old wiki's NeuralNetwork page

Artificial Neural Networks (often ANN for short) are a very popular approach to Machine Learning (ML). ANNs are modeled after the brains of animals in nature. ANNs consist of an input layer of neurons connected with synapses (or weights) to one or more hidden layers of neurons, which are in turn connected to the output layer of neurons (in the simplest networks at least). Each neuron uses an activation function that produces a known output for a given input (though the activation function can vary between neurons, if I have understood things correctly). Neural networks are most often trained using known data (input and output), and it is those synapses/weights that are adjusted automagically until the network produces correct (or close enough) output for the input data contained in the training set. Sorry if this doesn't make sense, I'm less than a layman in these subjects. Read more about ANNs here: ftp://ftp.sas.com/pub/neural/FAQ.html -- PEZ
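
To make the neuron part concrete, a single neuron from that description is just a weighted sum passed through an activation function; a minimal sketch:

 // One neuron: weighted sum of the inputs plus a bias, squashed by a sigmoid.
 double neuronOutput(double[] inputs, double[] weights, double bias) {
     double sum = bias;
     for (int i = 0; i < inputs.length; i++) {
         sum += inputs[i] * weights[i];
     }
     return 1.0 / (1.0 + Math.exp(-sum));             // output in (0, 1)
 }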

How does this relate to Robocode then? Well, when Crippa and I checked the book stores for books on Artificial Intelligence in general and ANNs in particular, we found four meters of books, and the bulk was about robots. I think there were even seven or eight books about LegoMindStorms alone! No books on Robocode yet, but they might be on their way. =) In theory at least it should be possible to train a neural network to guess where enemy bots will be (and when) based on data like recent movement history (see PatternMatching), battle field boundaries, number of enemies... and whatever. I know of only one robot using this method and it is BlotBot by Joachim Hofer, which is a really tough robot to meet. It includes source code, GPL even, so anyone curious can take a look and use the code in their own bots. -- PEZ

Actually BlotBot doesn't use a neural net. An earlier bot of his, XBot, uses them, but it learns too slowly so he switched to more conventional pattern matching for BlotBot. --David Alves

Neural Nets are complicated, large, and tricky to get right. If you want to attempt to understand them better, or you're considering using them in your bots, I highly suggest the Wikipedia article and its listed sources. Good luck. This topic has stumped many great coders. --Speal


I have been experimenting with Simple Self-Organizing Maps (no such thing, right!?). Here are my results of a simple color matching on a randomized 100x100 starting field; this ran through a loop 100 times. (I had a few underlying sets of for loops to locate the BMU; however, I have a neat idea for finding the BMU faster, even if it's not the "best" one.)

SSOM

I'm gonna take my time and do some reading on this subject.

I'm looking into a statistical process with Markov Chains; has anyone tried these before for targeting and/or movement? --Chase-san

Nope, but we did cover Hidden Markov Models in a Machine Learning course I took this past semester. Interesting stuff... Good luck. It's always good to explore some new statistical analysis methods. -- Voidious


In general, as far as I've read, the only reliable way to use a Markov chain in a robot would be almost exactly like a normal guess factor gun, except that you would recalculate the probabilities each time, instead of just stacking up over time. So much for that idea, huh. I have unexpectedly suddenly started understanding what the article was saying. ;) --Chase-san
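
For the curious, a first-order Markov chain over discretized movement states is nothing more than a transition-count table; a tiny sketch (the velocity discretization is an arbitrary choice):

 // Count transitions between discretized states (e.g. velocity rounded to an
 // integer, -8..8 mapped to 0..16); normalize a row to get the probabilities.
 static final int STATES = 17;
 int[][] transitions = new int[STATES][STATES];

 void observe(int previousState, int nextState) {
     transitions[previousState][nextState]++;
 }

 int predictNext(int currentState) {
     int best = 0;
     for (int s = 1; s < STATES; s++) {
         if (transitions[currentState][s] > transitions[currentState][best]) {
             best = s;
         }
     }
     return best;
 }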
