Talk:Neural Targeting
This page could definitely be fleshed out a bit with information about other neural targeting bots, but I just don't know enough about most of them to write intelligently about them. Some of them are: Orca, OrcaM, GB, TheBrainPi, NeuralPremier, Fe4r, Chomsky, Gaia, and Thinker. It also strikes me as reading a bit too historical and not quite techy enough, but really, neural targeting is a pretty broad field, so I think it might be OK to have it like this and link to various other technical pages / discussions. --Voidious 19:23, 12 November 2007 (UTC)
- Both Wcsv and I used Self-Organizing Maps (aka Kohonen Maps) as our neural network. I might release the code once I get it reformatted in a way that won't scare everyone off. --Chase-san 19:27, 12 November 2007 (UTC)
Can anyone tell me briefly how Neural Networks and the GHA algorithm work, in simpler words than the Wikipedia page? Thank you in advance. --Nat Pavasant 07:54, 1 November 2009 (UTC)
Interesting, I happened to start playing with Neural Networks yesterday. Currently I am experimenting with a 3-layer perceptron, which is a type of neural network (I believe). http://www.willamette.edu/~gorr/classes/cs449/figs/hidden1.gif is a simple diagram of a 3-layer perceptron. (Note that you could have more than 1 input; you could also have more than 1 hidden node, but for now we'll assume there is only one)
- Essentially, you feed it the input(s); each input, including the bias for the hidden node, is then multiplied by a weight and summed up.
- This is fed to the hidden node, which passes the sum through a function (typically something like tanh).
- This output, along with the bias for the output node, is then again multiplied by the respective weights and summed up. That gives you your output.
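The three steps above can be sketched in a few lines of Java. This is a minimal illustration with one input, one hidden node, and one output; the class name and all weight values are made-up examples, not from any bot mentioned on this page:

```java
// Minimal sketch of the forward pass described above: one input, one hidden
// node (tanh activation), one output, with a bias feeding each node.
// All weight values here are illustrative.
public class ThreeLayerPerceptron {
    static final double W_IN_HIDDEN  = 0.5;  // weight: input -> hidden
    static final double B_HIDDEN     = 0.1;  // bias for the hidden node
    static final double W_HIDDEN_OUT = -0.3; // weight: hidden -> output
    static final double B_OUTPUT     = 0.2;  // bias for the output node

    static double feedForward(double input) {
        // Steps 1-2: weighted sum into the hidden node, squashed by tanh
        double hidden = Math.tanh(input * W_IN_HIDDEN + B_HIDDEN);
        // Step 3: weighted sum into the output node gives the output
        return hidden * W_HIDDEN_OUT + B_OUTPUT;
    }

    public static void main(String[] args) {
        System.out.println(feedForward(1.0));
    }
}
```

With more inputs or hidden nodes, each sum simply runs over all incoming connections instead of a single one.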
http://www.willamette.edu/~gorr/classes/cs449/intro.html has more information, specifically http://www.willamette.edu/~gorr/classes/cs449/multilayer.html for this 3-layer perceptron. As for training this...I'll have to explain that in another post. --Starrynte 15:47, 1 November 2009 (UTC)
Ahh neat, yeah, take a look at Kohonen Map! Cheers! --Chase 15:49, 1 November 2009 (UTC)
Thanks, even though I don't think I understand it yet. One more thing I want to know: what is the difference between a back-propagation neural network (BPN) and a feedforward neural network? --Nat Pavasant 12:44, 2 November 2009 (UTC)

: Oh, just answering myself: back-propagation is the training algorithm, while feedforward is the mapping algorithm (how the network computes its output). --Nat Pavasant 12:52, 2 November 2009 (UTC)
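The distinction raised here (feedforward computes the output; back-propagation adjusts the weights during training) can be sketched for the 1-input, 1-hidden-node, 1-output perceptron described earlier in this thread. Everything below is a hypothetical illustration with made-up names and values, using plain gradient descent on squared error:

```java
// Hypothetical sketch: feedforward plus one back-propagation step for a
// 1-1-1 perceptron (tanh hidden node, linear output, squared error).
public class BackpropSketch {
    static double wIn = 0.5, bHid = 0.1;   // input -> hidden weight and bias
    static double wOut = -0.3, bOut = 0.2; // hidden -> output weight and bias
    static final double RATE = 0.1;        // learning rate (illustrative)

    // Feedforward: map one input to an output.
    static double forward(double x) {
        double hidden = Math.tanh(x * wIn + bHid);
        return hidden * wOut + bOut;
    }

    // Back-propagation: nudge every weight toward a target output.
    static void train(double x, double target) {
        double hidden = Math.tanh(x * wIn + bHid);
        double out = hidden * wOut + bOut;
        double dOut = out - target;                           // d(error)/d(output)
        double dHidden = dOut * wOut * (1 - hidden * hidden); // tanh derivative
        wOut -= RATE * dOut * hidden;
        bOut -= RATE * dOut;
        wIn  -= RATE * dHidden * x;
        bHid -= RATE * dHidden;
    }

    public static void main(String[] args) {
        double before = Math.abs(forward(1.0) - 0.8);
        for (int i = 0; i < 100; i++) train(1.0, 0.8); // fit a single example
        double after = Math.abs(forward(1.0) - 0.8);
        System.out.println(before + " -> " + after);   // the error shrinks
    }
}
```

So every back-propagation network is also a feedforward network at prediction time; "BPN" just names how it was trained.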