NEURAL NETWORKING

The program provided is an example of a neural network implementing the backpropagation paradigm. There is no full application for this program; it was written as a learning exercise in neural networking. To my knowledge, however, backpropagation is the most widely used neural net paradigm. The network provided is based on the exclusive-or (XOR) problem, the problem most often used to demonstrate backpropagation. The program could be adapted to other problems as well.

Yes, I do know that FORTRAN is not the best language for neural networking, or for most AI applications. FORTRAN was used to simulate this "baby" net because I am more familiar with FORTRAN, we have no LISP compiler on the VAX anyway, and a comparison was wanted with the LISP, C, and PROLOG programs that already existed for the XOR problem. LISP is used on a Texas Instruments Explorer machine to implement other paradigms, or to attempt to implement them.

If you have implementations of any paradigms, or some good research, we would appreciate having information swapped (unclassified only, in our case) or passed on. At present, research is being performed here on the HOPFIELD, KOHONEN, and NEOCOGNITRON paradigms, and due to funding we are not able to expand or pursue these or other paradigms as we would like. Any information provided would be greatly appreciated. Please pass this notice on to groups that may share this interest as well.

Files in this directory:

  [.KOHONEN]KOHONEN.ARC     - Architecture of the Kohonen network
                              (Harvard Graphics; part of KOHONEN.WP)
  [.KOHONEN]KOHONEN.JPLJIID - Print file for the HP LaserJet IID (PostScript)
  [.KOHONEN]KOHONEN.LPS40   - Print file for the LPS40 (PostScript)
  [.KOHONEN]KOHONEN.WP      - WordPerfect file on the Kohonen paradigm
  [.BACKPROP]NET.CLD        - Command language definition
  [.BACKPROP]NET.EXE        - The executable form of the network
  [.BACKPROP]NET.FOR        - Source code
  [.BACKPROP]NET.INC        - Include file that defines the structure
  [.BACKPROP]NET.OBJ        - Object file
  [.BACKPROP]SHUFFLE.CLD    - Command language definition
  [.BACKPROP]SHUFFLE.EXE    - Executable program to randomly order training data
  [.BACKPROP]SHUFFLE.FOR    - Source code
  [.BACKPROP]SHUFFLE.OBJ    - Object file
  [.BACKPROP]TRAIN2XOR.DAT  - 2-input exclusive-or training data
  [.BACKPROP]TRAIN3XOR.DAT  - 3-input exclusive-or training data
  [.BACKPROP]TRAIN4XOR.DAT  - 4-input exclusive-or training data
  [.BACKPROP]TRAIN5XOR.DAT  - 5-input exclusive-or training data

Structure (network architecture) for 2 to 10 inputs for XOR:

                  input layer : hidden layer : output layer
  ----------------------------------------------------------
    2-input xor:       2      :       2      :      1
    3-input xor:       3      :       3      :      1
    4-input xor:       4      :       4      :      1
    5-input xor:       5      :       7      :      1
    6-input xor:       6      :      11      :      1
    7-input xor:       7      :      19      :      1
    8-input xor:       8      :      33      :      1
    9-input xor:       9      :      57      :      1
   10-input xor:      10      :     103      :      1

The equation used to obtain the hidden-layer figures past 2 inputs was something like this:

    Hidden = TRUNC( ((2**in) / (in * out)) + 1 )

where:

    TRUNC  is the truncation function
    Hidden is the number of hidden nodes
    in     is the number of input nodes
    out    is the number of output nodes (1 in XOR)
    2**in  is the number of training cases

Always pick the smallest number of hidden nodes (excluding bias nodes) that gives at least as many paths to the output layer as there are training cases. For example, the 2-input XOR has 4 training cases (0 0, 0 1, 1 0, 1 1), and with 2 hidden nodes there are 4 paths to the output layer:

     input layer    0   0
                    |\ /|
                    | X |
                    |/ \|
    hidden layer    0   0
                     \ /
    output layer      0

    (bias nodes not included in this figure)

To change the size of possible networks, modify the parameters in NET.INC and recompile NET.FOR.
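NET.INC itself is not reproduced in this note, so the sketch below is only a rough illustration of what include-file size parameters of this kind usually look like. The names (MAXIN, MAXHID, MAXOUT) and the values are invented for the sketch and are not the actual contents of NET.INC.

C     Hypothetical NET.INC-style parameters; these names are invented
C     for illustration and are NOT the actual names used by NET.INC.
      INTEGER MAXIN, MAXHID, MAXOUT
      PARAMETER (MAXIN = 10, MAXHID = 103, MAXOUT = 1)
C     Weight arrays sized from the parameters; the extra row in each
C     array leaves room for one bias weight per node.
      REAL WIH(MAXIN+1,MAXHID), WHO(MAXHID+1,MAXOUT)

After editing the include file, recompile and relink in the usual VMS way (FORTRAN NET, then LINK NET).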
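Returning to the hidden-node equation above: it can be checked against the table mechanically. The stand-alone sketch below (not part of NET.FOR) does so; FORTRAN integer division already truncates, so no explicit TRUNC is needed.

C     Check the hidden-node equation against the table above.
C     Integer division truncates, so TRUNC is implicit.
      PROGRAM HIDCHK
      INTEGER IN, OUT, HIDDEN
      OUT = 1
      DO 10 IN = 2, 10
         HIDDEN = (2**IN) / (IN * OUT) + 1
         WRITE (*,*) IN, ' inputs: ', HIDDEN, ' hidden nodes'
   10 CONTINUE
      END

This reproduces the table for 3 and for 5 through 10 inputs. It gives 3 rather than 2 at 2 inputs (the 2-input entry comes from the path rule instead) and 5 rather than 4 at 4 inputs, where the path rule also gives 4, which is consistent with the equation being only approximate.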
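Finally, to make the backpropagation paradigm itself concrete, the stand-alone sketch below trains the 2-2-1 network of the figure above on the 2-input XOR. It is not an excerpt from NET.FOR: it assumes sigmoid units, one bias weight per node, per-pattern (on-line) updates, a fixed learning rate of 0.5, and hand-picked asymmetric starting weights. From these starting weights it normally settles near the XOR targets, though backpropagation can stall in local minima from other starting points.

C     Minimal 2-2-1 backpropagation sketch for the 2-input XOR.
C     For illustration only; not taken from NET.FOR.
      PROGRAM BPXOR
      REAL WIH(3,2), WHO(3), H(2), OUT, DOUT, DH(2)
      REAL X(2,4), T(4), ETA, S
      INTEGER J, P, EPOCH
C     The four training cases and their targets
      DATA X /0.,0., 0.,1., 1.,0., 1.,1./
      DATA T /0., 1., 1., 0./
C     Starting weights; the third row/element of each array is a bias
      DATA WIH /0.5,-0.4,0.3, -0.3,0.2,0.1/
      DATA WHO /0.4,-0.2,0.3/
      ETA = 0.5
      DO 40 EPOCH = 1, 20000
         DO 30 P = 1, 4
C           Forward pass: input layer to hidden layer (sigmoid units)
            DO 10 J = 1, 2
               S = WIH(3,J) + WIH(1,J)*X(1,P) + WIH(2,J)*X(2,P)
   10       H(J) = 1.0 / (1.0 + EXP(-S))
C           Hidden layer to output layer
            S = WHO(3) + WHO(1)*H(1) + WHO(2)*H(2)
            OUT = 1.0 / (1.0 + EXP(-S))
C           Backward pass: deltas use the sigmoid derivative o*(1-o)
            DOUT = (T(P) - OUT) * OUT * (1.0 - OUT)
            DO 20 J = 1, 2
   20       DH(J) = DOUT * WHO(J) * H(J) * (1.0 - H(J))
C           Weight updates: a gradient step scaled by ETA
            DO 25 J = 1, 2
               WHO(J) = WHO(J) + ETA * DOUT * H(J)
               WIH(1,J) = WIH(1,J) + ETA * DH(J) * X(1,P)
               WIH(2,J) = WIH(2,J) + ETA * DH(J) * X(2,P)
   25       WIH(3,J) = WIH(3,J) + ETA * DH(J)
            WHO(3) = WHO(3) + ETA * DOUT
   30    CONTINUE
   40 CONTINUE
C     Print the trained response to each training case
      DO 60 P = 1, 4
         DO 50 J = 1, 2
            S = WIH(3,J) + WIH(1,J)*X(1,P) + WIH(2,J)*X(2,P)
   50    H(J) = 1.0 / (1.0 + EXP(-S))
         S = WHO(3) + WHO(1)*H(1) + WHO(2)*H(2)
         OUT = 1.0 / (1.0 + EXP(-S))
         WRITE (*,*) X(1,P), X(2,P), ' -> ', OUT
   60 CONTINUE
      END

NET.FOR generalizes the same idea to the layer sizes defined in NET.INC, with the TRAINnXOR.DAT files supplying the training cases.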
For questions or for information regarding this neural networking research, please contact:

    Jonathan C. Baker                     Mike Kutchinski
    NAVAL SURFACE WARFARE CENTER          NAVAL SURFACE WARFARE CENTER
    Code N23                      |OR|    Code N35
    Dahlgren, VA 22448                    Dahlgren, VA 22448
    703-663-8705                          703-663-1674