KOHONEN NETWORKS

JPD ASSIGNMENT BY: JONATHAN C. BAKER
SEPTEMBER 1991


KOHONEN NETWORKS

Kohonen networks are part of the family of neural networks that are self-organizing, which means that they do not follow a supervised learning principle.
The designed use of this particular paradigm was for statistical purposes, since the paradigm mimics the probability density function of its input data. Some of the uses of Kohonen networks are pattern recognition, statistical data analysis, control, and knowledge processing.

In the Kohonen architecture, each input node is connected to each node in the Kohonen layer, with each connection having an independent weight (fig 1). The Kohonen network also has no bias nodes in its architecture.

[Figure 1: Kohonen Architecture]

At the initialization of this network, the weight values are all distributed evenly. It is suggested that all weight values be equal to:

    w_i^n = 1 / sqrt(N)                                            (1)

where i is a specific Kohonen node, n refers to the input node, and N is the number of Kohonen nodes.

When the Kohonen network is active, the network forward feeds, and all Kohonen nodes compete to determine which is closest to the input vector. This is determined by the Euclidean distance formula:

    d(x_n, w_i^n) = |x_n - w_i^n| .                                (2)

The variable x represents the input vector, with n representing a specific node of the input vector. When the closest node, that is, the node with the smallest distance by equation (2), is found, its value is set to 1, while all other nodes are set to 0. This ensures that only one node is marked as a "winner."

Weights are modified when the network is in a learning stage. The network continues to forward feed; however, the "winning" node will be the only node to modify its weights. Equation (3) is used for this weight modification:

    w_i^n(new) = w_i^n(old) + α (x_n - w_i^n(old)) z_i .           (3)
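The initialization, competition, and winner-take-all update described above can be sketched in a few lines of NumPy. The function names, array shapes, and the single-sample update are illustrative choices, not part of the original assignment:

```python
import numpy as np

def init_weights(num_inputs, num_kohonen):
    # Equation (1): start every weight at 1/sqrt(N),
    # where N is the number of Kohonen nodes.
    return np.full((num_kohonen, num_inputs), 1.0 / np.sqrt(num_kohonen))

def kohonen_step(weights, x, alpha):
    # Equation (2): Euclidean distance from the input vector
    # to each Kohonen node's weight vector.
    distances = np.linalg.norm(weights - x, axis=1)
    # Winner-take-all: z_i = 1 for the closest node, 0 for all others.
    winner = np.argmin(distances)
    z = np.zeros(len(weights))
    z[winner] = 1.0
    # Equation (3): only the winner (z_i = 1) moves toward the input.
    new_weights = weights + alpha * (x - weights) * z[:, None]
    return new_weights, winner
```

Repeated calls with the same input pull the winning node's weight vector a fraction α closer to x each time, while the losing nodes stay fixed.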
Since z_i will be equal to 0 for "loser" nodes, equation (3) will still apply for all weight modifications. Alternatively, the following equations may also be used in place of equation (3):

    w_i^n(new) = (1 - α) w_i^n(old) + α x_n                        (4)

for the winning node, and:

    w_i^n(new) = w_i^n(old)                                       (5)

for the losing nodes. The variable α is a real constant from (0,1] and therefore moves the winning node a fraction α of the way from the old weight vector towards the x (input) vector. Typically, α starts at .8, and as the weight vectors approach the input vector, α is lowered to .1 or less for final equilibration.

Normal Kohonen learning allows one vector to be learned by the network by allowing the winning node to continually win (one Kohonen node wins first, then its weights are moved closer to the input vector, thereby making it the "eternal" winner). To create better competition between the nodes, a bias factor is introduced to allow the system to learn over an entire range. This bias term represents the amount by which each node's frequency of winning is above or below the norm. Nodes that win often will have large negative bias values, while those nodes that do not will have large positive bias values. Each individual node will have its own bias factor, b_i. The distance formula is then modified to:

    d(x_n, w_i^n) = |x_n - w_i^n| - b_i .                          (6)

The bias factor is generated using equation (7). Γ is a positive constant (10.0 would be one example), while N represents the number of Kohonen nodes:

    b_i = Γ (1/N - f_i) .                                          (7)
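The biased competition of equations (6) and (7) can be sketched as follows; the function name, the default Γ of 10.0 (the example value given above), and the two-node test scenario are illustrative assumptions:

```python
import numpy as np

def biased_winner(weights, x, f, gamma=10.0):
    # Equation (7): nodes winning less than their fair share (f_i < 1/N)
    # get a positive bias; frequent winners get a negative one.
    n = len(weights)
    b = gamma * (1.0 / n - f)
    # Equation (6): subtracting the bias makes frequent winners look
    # farther from the input than they really are, so other nodes
    # get a chance to win and the whole map keeps learning.
    d = np.linalg.norm(weights - x, axis=1) - b
    return int(np.argmin(d))
```

With equal win frequencies the bias vanishes and this reduces to the plain Euclidean competition of equation (2).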
The variable f_i is a fractional value that is also attached to each Kohonen node and used to determine the bias variable. The fraction can be calculated by equation (8):

    f_i(new) = f_i(old) + β (z_i - f_i(old)) .                     (8)

In this equation, β is a small positive value (.0001 would be one example). One should also remember that z_i will retain its current value of either 0 or 1.
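Equation (8) can be sketched as a running estimate of each node's fraction of wins; the function name and the β default are illustrative, and a larger β than the .0001 example is used in the comment only to make the drift visible:

```python
import numpy as np

def update_frequencies(f, winner, beta=0.0001):
    # Equation (8): z_i is 1 for the winning node and 0 otherwise,
    # so each f_i drifts toward that node's long-run share of wins.
    z = np.zeros_like(f)
    z[winner] = 1.0
    return f + beta * (z - f)
```

A useful property of this rule is that if the f_i start out summing to 1 (e.g. all equal to 1/N), they continue to sum to 1 after every update, so they remain interpretable as win fractions.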