
Artificial Neural Networks (人工神经网络): Introduction

Contents
- Introduction to ANNs
- Processing elements (neurons)
- Architecture
- Functional taxonomy of ANNs
- Structural taxonomy of ANNs
- Features
- Learning paradigms
- Applications

The Biological Neuron
- Roughly 10 billion neurons and billions of synapses in the human brain
- Summation of input stimuli: spatial (signals) and temporal (pulses)
- Threshold over composed inputs
- Constant firing strength
- Chemical transmission and modulation of signals
- Inhibitory synapses
- Excitatory synapses

Biological Neural Networks
- On the order of 10,000 synapses per neuron
- Computational power = connectivity
- Plasticity: new connections (?) and modification of the strength of existing connections

Neural Dynamics
[Figure: membrane potential trace showing an action potential and the refractory time]
- Action potential: about 100 mV
- Activation threshold: about 20-30 mV
- Rest potential: about -65 mV
- Spike time: 1-2 ms
- Refractory time: 10-20 ms

The Complexity of Neural Networks
The complexity and diversity of neural networks lie not only in the sheer number of neurons and synapses, the complexity of their combinations, and the breadth of their connections, but also in the complexity of the mechanisms of synaptic transmission. Mechanisms discovered and elucidated so far include postsynaptic excitation, postsynaptic inhibition, presynaptic inhibition, presynaptic excitation, and "remote" inhibition. Within synaptic transmission, the release of neurotransmitters is the central step, and different neurotransmitters have different modes and characteristics of action.

Research on Neural Networks
Nervous system activity, whether sensation, movement, or the brain's higher functions (such as learning, memory, and emotion), manifests at the level of the whole system, and analysing the neural basis and mechanisms of that behaviour inevitably involves many levels. Studies at these different levels inspire and advance one another: work at the lower levels (cellular, molecular) provides the basis for analysing higher-level observations, while higher-level observations help guide the direction of lower-level work and reveal its functional significance. There are both discipline-specific studies (physical, chemical, physiological, psychological) and integrative ones.

The Artificial Neuron
[Figure: neuron i receives the inputs x1(t)...x5(t) through the weights wi1...wi5 and produces the response yi(t)]
- u_rest = resting potential
- xj(t) = output of neuron j at time t
- wij = connection strength between neuron i and neuron j
- u(t) = total stimulus at time t
Stimulus: ui(t) = Σj wij xj(t); response: yi(t) = f(ui(t)).
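The static neuron above translates directly into code. Here is a minimal sketch, assuming NumPy and (as one arbitrary choice, since the slides do not fix f) a tanh activation:

```python
# Minimal sketch of the static artificial neuron described above: the total
# stimulus is the weighted sum of the inputs, and the response applies an
# activation function f (tanh here is an illustrative assumption).
import numpy as np

def neuron_response(x, w, f=np.tanh):
    """x: input vector xj(t); w: weights wij; f: activation function."""
    u = np.dot(w, x)          # total stimulus u(t) = sum_j wij * xj(t)
    return f(u)               # response yi(t) = f(u(t))

# Example: five inputs, as in the figure above.
x = np.array([0.5, -1.0, 0.2, 0.8, -0.3])
w = np.array([0.1, 0.4, -0.2, 0.7, 0.05])
print(neuron_response(x, w))
```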

Artificial Neural Models
- McCulloch-Pitts-type neurons (static)
  - Digital neurons: activation-state interpretation (a snapshot of the system each time a unit fires)
  - Analog neurons: firing-rate interpretation (the activation of a unit equals its firing rate)
  - The activation of the neurons encodes information
- Spiking neurons (dynamic)
  - Firing-pattern interpretation (spike trains of units)
  - The timing of spike trains encodes information (time to first spike, phase of the signal, correlation and synchronicity)

Binary Neurons
- "Hard" threshold: the response switches from off to on when the stimulus reaches the threshold
[Figure: step-shaped stimulus-response curve]
- Examples: perceptrons, Hopfield NNs, Boltzmann machines
- Main drawbacks: can only map binary functions; biologically implausible

Analog Neurons
- "Soft" threshold: the response rises smoothly from off to on around the threshold
[Figure: sigmoidal stimulus-response curve]
- Examples: MLPs, recurrent NNs, RBF NNs
- Main drawbacks: difficult to process time patterns; biologically implausible
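The two threshold types can be compared side by side. A minimal sketch, assuming NumPy; the threshold and slope values are illustrative:

```python
# Sketch of the two static activation types just described: a "hard"
# threshold (binary neuron) and a "soft" threshold (analog neuron).
import numpy as np

def hard_threshold(u, theta=0.0):
    # Binary neuron: off (0) below the threshold, on (1) at or above it.
    return np.where(u >= theta, 1.0, 0.0)

def soft_threshold(u, theta=0.0, slope=1.0):
    # Analog neuron: logistic sigmoid, a smooth transition around theta.
    return 1.0 / (1.0 + np.exp(-slope * (u - theta)))

u = np.linspace(-3, 3, 7)
print(hard_threshold(u))
print(soft_threshold(u))
```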

Spiking Neurons
- η = spike and after-spike potential
- u_rest = resting potential
- ε(t, t′) = trace at time t of an input arriving at time t′
- θ = threshold
- xj(t) = output of neuron j at time t
- wij = efficacy of the synapse from neuron i to neuron j
- u(t) = input stimulus at time t

Spiking Neuron Dynamics
[Figure: membrane potential of a spiking neuron, showing spikes and after-spike potentials over time]
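The dynamics in the figure can be approximated in a few lines with a leaky integrate-and-fire model; this is an assumed stand-in for the deck's spike-response equations, with arbitrary parameter values:

```python
# Illustrative leaky integrate-and-fire simulation (an assumed stand-in
# for the slide's spike-response model; parameter values are arbitrary).
import numpy as np

u_rest, theta, tau = -65.0, -50.0, 10.0   # mV, mV, ms
dt, T = 0.1, 100.0                        # time step and duration in ms
u, spikes = u_rest, []

for step in range(int(T / dt)):
    t = step * dt
    I = 20.0 if 10.0 <= t <= 80.0 else 0.0   # input current pulse
    # Leaky integration toward u_rest, driven by the input stimulus.
    u += dt / tau * (-(u - u_rest) + I)
    if u >= theta:             # threshold crossing: emit a spike...
        spikes.append(t)
        u = u_rest             # ...and reset (crude refractory behaviour)

print("spike times (ms):", spikes)
```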

Hebb's Rule
The Canadian psychologist Donald Hebb, in his book The Organization of Behavior, proposed that learning raises the strength and transmission efficacy of synaptic connections; this is "Hebb's rule". On this basis, a variety of learning rules and algorithms have been proposed to suit different network models. Effective learning algorithms let a neural network construct an internal representation of the external world by adjusting its connection weights, forming a distinctive style of information processing in which storage and processing are embodied in the network's connections.

Hebb's Postulate of Learning
Biological formulation: "When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."

Hebb's Postulate: Revisited
Stent (1973) and Changeux and Danchin (1976) expanded Hebb's rule so that it also models inhibitory synapses:
- If the two neurons on either side of a synapse are activated simultaneously (synchronously), the strength of that synapse is selectively increased.
- If the two neurons on either side of a synapse are activated asynchronously, that synapse is selectively weakened or eliminated.

Artificial Neural Networks
[Figure: a layered network with an input layer, hidden layers, and an output layer; connectivity may be full or sparse]

Feedforward ANN Architectures
- Information flow is unidirectional
- Static mapping: y = f(x)
- Multi-Layer Perceptron (MLP)
- Radial Basis Function (RBF)
- Kohonen Self-Organising Map (SOM)

Recurrent ANN Architectures
- Feedback connections
- Dynamic memory: y(t+1) = f(x(τ), y(τ), s(τ)) for τ = t, t-1, ...
- Jordan/Elman ANNs
- Hopfield networks
- Adaptive Resonance Theory (ART)

History
Early stages:
- 1943 McCulloch-Pitts: the neuron as a computing element
- 1948 Wiener: cybernetics
- 1949 Hebb: learning rule
- 1958 Rosenblatt: perceptron
- 1960 Widrow-Hoff: least-mean-squares algorithm
Recession:
- 1969 Minsky-Papert: limitations of the perceptron model
Revival:
- 1982 Hopfield: recurrent network model
- 1982 Kohonen: self-organising maps
- 1986 Rumelhart et al.: backpropagation

History (continued)
In the 1940s, McCulloch and the mathematician Pitts proposed their model of excitatory and inhibitory neurons, and Hebb proposed his rule for modifying the strength of neural connections; these results remain the foundation of much neural network modelling today. The representative work of the 1950s and 1960s was Rosenblatt's perceptron and Widrow's adaptive element, the Adaline. In 1969 Minsky and Papert published the influential book Perceptrons, which drew pessimistic conclusions; with digital computing at its peak and scoring notable successes in artificial intelligence, neural network research fell into a trough during the 1970s.

After the 1980s, the traditional von Neumann digital computer ran into physically insurmountable limits when simulating audiovisual intelligence. At the same time Rumelhart, McClelland, Hopfield and others achieved breakthroughs in neural networks, and the field surged again: Adaptive Resonance Theory (ART); self-organising feature maps; the Helmholtz machine recently proposed by Hinton et al.; Lei Xu's Ying-Yang machine theory; and the statistical-manifold methods pioneered and developed by Shun-ichi Amari (S. Amari), applied to neural network research.

ANN Capabilities
- Learning
- Approximate reasoning
- Generalisation capability
- Noise filtering
- Parallel processing
- Distributed knowledge base
- Fault tolerance

Main Problems with ANNs
- Knowledge base not transparent (black box) (partially resolved)
- Learning is sometimes difficult/slow
- Limited storage capability

ANN Learning Paradigms
- Supervised learning: classification, control, function approximation, associative memory
- Unsupervised learning: clustering
- Reinforcement learning: control

Supervised Learning
- A teacher presents input-output pairs to the ANN
- ANN weights are adjusted according to the error
- Iterative algorithms (e.g. delta rule, BP rule)
- One-shot learning (Hopfield)
- The quality of the training examples is critical

Linear Separability in Perceptrons (slide by Martin Ho, Eddy Li, Eric Wong and Kitty Wong, 2000)
[Figure: two linearly separable classes divided by a perceptron decision boundary]

Learning Linearly Separable Functions (slide by Martin Ho, Eddy Li, Eric Wong and Kitty Wong, 2000)
What can these functions learn?
- Bad news: there are not many linearly separable functions.
- Good news: there is a perceptron algorithm that will learn any linearly separable function, given enough training examples (see the sketch below).
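The slides name the perceptron algorithm without giving code; a minimal sketch follows, assuming NumPy, a fixed learning rate, and the AND function as a linearly separable example:

```python
# Classic perceptron learning algorithm: mistake-driven updates that find
# a separating hyperplane for any linearly separable training set.
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=100):
    """X: inputs, one row per example; y: targets in {0, 1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, ti in zip(X, y):
            out = 1.0 if np.dot(w, xi) + b >= 0 else 0.0
            # Mistake-driven update: no change on correct outputs.
            w += lr * (ti - out) * xi
            b += lr * (ti - out)
    return w, b

# The linearly separable AND function as training data.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print(w, b)   # a separating hyperplane for AND
```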

Delta Rule
- a.k.a. least mean squares (LMS)
- Widrow-Hoff iterative delta rule (1960)
- Gradient descent on the error surface
- Guaranteed to find the minimum-error configuration in single-layer ANNs
- A stochastic approximation of the desired behaviour
A sketch of the rule follows.
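A minimal LMS sketch, assuming NumPy, a single linear unit, and illustrative data:

```python
# Widrow-Hoff (LMS) delta rule for a single linear unit: each example
# nudges the weights down the gradient of its squared error.
import numpy as np

def lms_train(X, y, lr=0.05, epochs=200):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, ti in zip(X, y):
            err = ti - np.dot(w, xi)      # error for this example
            w += lr * err * xi            # delta rule: w += lr*(t - w.x)*x
    return w

# Noisy samples of a linear target y = 2*x1 - x2.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 2 * X[:, 0] - X[:, 1] + 0.01 * rng.normal(size=100)
print(lms_train(X, y))   # approximately [2, -1]
```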

Unsupervised Learning
- The ANN adapts its weights to cluster the input data
- Hebbian learning: stimulus-response connections are strengthened
- Competitive learning algorithms (Kohonen & ART): input weights are adjusted to resemble the stimulus

Hebbian Learning
- Hebb postulate (1949)
- Correlation-based learning
- Connections between concurrently firing neurons are strengthened
- Experimentally verified (1973)
General formulation: Δwij = l · yi · xj (Hebb postulate), where l = learning coefficient and wij = connection from neuron xj to yi; Kohonen & Grossberg (ART) use the variant Δwij = l · yi · (xj − wij). A sketch follows.
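A minimal sketch of the Hebbian update above, assuming NumPy; the layer sizes and learning coefficient are illustrative:

```python
# Hebbian update: weights grow in proportion to the correlation between
# pre- and postsynaptic activity (dw_ij = l * y_i * x_j).
import numpy as np

def hebbian_step(w, x, l=0.01, f=np.tanh):
    y = f(w @ x)                # postsynaptic activities y_i
    w += l * np.outer(y, x)    # Hebb: dw_ij = l * y_i * x_j
    return w

rng = np.random.default_rng(1)
w = rng.normal(scale=0.1, size=(3, 5))   # 5 inputs, 3 output neurons
for _ in range(100):
    x = rng.normal(size=5)
    w = hebbian_step(w, x)
print(w)   # note: plain Hebbian growth is unbounded without normalisation
```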

Learning Principle for Artificial Neural Networks
ENERGY MINIMIZATION. We need an appropriate definition of energy for artificial neural networks; having that, we can use mathematical optimisation techniques to find how to change the weights of the synaptic connections between neurons.
ENERGY = a measure of task performance error.

Neural Network Mathematics
[Figure: inputs flow through successive layers of neurons to the output; each neuron computes its output from the outputs of the previous layer]
A neural network is an input/output transformation, y_out = F(x, W), where W is the matrix of all weight vectors.

MLP Neural Networks
MLP = multi-layer perceptron.
- Perceptron: y_out = f(Σj wj xj), a weighted sum passed through an activation function
- MLP neural network: a composition of perceptron layers, the outputs of each layer serving as the inputs of the next

RBF Neural Networks
RBF = radial basis function: a function whose value depends only on the distance of the input x from a centre c. Example: the Gaussian RBF φ(x) = exp(-||x - c||² / (2σ²)). The network output y_out is a weighted sum of the hidden RBF units' responses. A sketch of both forward passes follows.
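Forward passes for both network types, as a minimal sketch assuming NumPy; layer sizes, centres, and weights are illustrative:

```python
# An MLP as a composition of weighted sums and sigmoids, and a Gaussian
# RBF network as a weighted sum of distance-based basis functions.
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def mlp_forward(x, weights):
    """weights: list of matrices, one per layer; each layer feeds the next."""
    h = x
    for W in weights:
        h = sigmoid(W @ h)
    return h

def rbf_forward(x, centers, sigma, w):
    """Gaussian RBF network: weighted sum of exp(-||x - c||^2 / (2 sigma^2))."""
    phi = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2 * sigma**2))
    return w @ phi

x = np.array([0.2, -0.5])
weights = [np.ones((3, 2)) * 0.3, np.ones((1, 3)) * 0.5]   # a 2-3-1 MLP
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
print(mlp_forward(x, weights), rbf_forward(x, centers, 1.0, np.array([1.0, -1.0])))
```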

Neural Network Tasks
- Control
- Classification
- Prediction
- Approximation
These can all be reformulated, in general, as FUNCTION APPROXIMATION tasks. Approximation: given a set of values of a function g(x), build a neural network that approximates the g(x) values for any input x.

Neural Network Approximation
Task specification:
- Data: a set of value pairs (xt, yt), with yt = g(xt) + zt, where zt is random measurement noise
- Objective: find a neural network that represents the input/output transformation (a function) F(x, W) such that F(x, W) approximates g(x) for every x

Learning to Approximate
Error: E(W) = Σt (F(xt, W) - yt)². Learning: gradient descent on the error, W_new = W_old - c · ∂E/∂W, where c is the learning parameter (usually a constant).

Learning with a Perceptron
- Perceptron: y_out = Σj wj xj
- Data: (x1, y1), (x2, y2), ..., (xN, yN)
- Error: E(t) = (y_out(xt) - yt)²
- Learning: wj(t+1) = wj(t) - c · ∂E(t)/∂wj
A perceptron is able to learn a linear function.

Learning with RBF Neural Networks
- Only the synaptic weights of the output neuron are modified; the centres and widths of the basis functions stay fixed
- An RBF neural network learns a nonlinear function (see the sketch below)
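Because only the output weights are trained, fitting an RBF network is linear in those weights. The sketch below solves it by least squares rather than the slides' iterative gradient updates; centres, width, and data are illustrative assumptions:

```python
# RBF training with fixed centres and widths: learning the output weights
# reduces to linear least squares on the RBF features.
import numpy as np

def rbf_features(X, centers, sigma):
    # phi[t, k] = exp(-||x_t - c_k||^2 / (2 sigma^2))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma**2))

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=200)   # nonlinear target

centers = np.linspace(-3, 3, 10).reshape(-1, 1)     # fixed centres
Phi = rbf_features(X, centers, sigma=0.8)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)          # output weights only
print(np.abs(Phi @ w - y).mean())                    # small training error
```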

Learning with MLP Neural Networks
- MLP neural network with p layers: the input x is propagated through layers 1, 2, ..., p-1, p to produce y_out
- Data: (x1, y1), ..., (xN, yN)
- Error: E(W) = Σt (F(xt, W) - yt)²

Learning with Backpropagation
Learning: apply the chain rule for differentiation. Calculate first the changes for the synaptic weights of the output neuron, then calculate the changes backward starting from layer p-1, propagating the local error terms backward. The method is still relatively complicated, but it is much simpler than the original optimisation problem. A sketch follows.
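A minimal backpropagation sketch for one hidden layer and squared error, assuming NumPy; the network size, data, and learning rate are illustrative:

```python
# Backpropagation for a small MLP, following the chain-rule recipe above:
# output weights first, then local error terms propagated backward.
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(100, 2))
y = (X[:, 0] * X[:, 1]).reshape(-1, 1)      # a nonlinear target

W1 = rng.normal(scale=0.5, size=(2, 8))     # input -> hidden
W2 = rng.normal(scale=0.5, size=(8, 1))     # hidden -> output (linear)
c = 0.1                                     # learning rate

for _ in range(2000):
    h = sigmoid(X @ W1)                     # forward pass
    out = h @ W2
    err = out - y
    grad_W2 = h.T @ err / len(X)            # output weights first
    delta_h = (err @ W2.T) * h * (1 - h)    # local error at hidden layer
    grad_W1 = X.T @ delta_h / len(X)
    W2 -= c * grad_W2
    W1 -= c * grad_W1

print("mean squared error:", float((err ** 2).mean()))
```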

Learning with General Optimization
In general it is enough to have a single layer of nonlinear neurons in a neural network in order to learn to approximate a nonlinear function. In such a case, general optimisation may be applied without too much difficulty. Example: an MLP neural network with a single hidden layer, y_out = Σk w2_k · f(w1_k · x + b_k). The error E(W) is then minimised over all weights simultaneously, as sketched below.
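A sketch of this approach, assuming NumPy and scipy.optimize.minimize as the general-purpose optimiser (the slides do not name a specific one); data and sizes are illustrative:

```python
# "Learning with general optimisation": flatten the weights of a
# single-hidden-layer MLP into one parameter vector and hand the squared
# error to a generic optimiser.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
X = rng.uniform(-2, 2, size=(100, 1))
y = np.tanh(2 * X[:, 0])                    # nonlinear target g(x)

H = 6                                       # hidden units

def unpack(p):
    W1 = p[:H].reshape(H, 1)                # input -> hidden weights
    b = p[H:2 * H]                          # hidden biases
    w2 = p[2 * H:]                          # hidden -> output weights
    return W1, b, w2

def error(p):
    W1, b, w2 = unpack(p)
    hidden = 1.0 / (1.0 + np.exp(-(X @ W1.T + b)))
    return np.sum((hidden @ w2 - y) ** 2)   # task performance error

p0 = rng.normal(scale=0.5, size=3 * H)
res = minimize(error, p0)                   # generic optimisation
print("final error:", res.fun)
```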

New Methods for Learning with Neural Networks
- Bayesian learning: the distribution of the neural network parameters is learnt
- Support vector learning: the minimal representative subset of the available data is used to calculate the synaptic weights of the neurons

Reinforcement Learning
- Sequential tasks
- The desired action may not be known
- A critic evaluates the ANN's behaviour
- Weights are adjusted according to the critic
- May require credit assignment
- Population-based learning: evolutionary algorithms, swarming techniques, immune networks

ANN Summary
[Summary table of ANN architectures and learning paradigms]

Neural Network Ensembles
In 1996, Sollich and Krogh defined neural network ensembles as follows: "a neural network ensemble uses a finite number of neural networks to learn the same problem; the ensemble's output for a given input example is decided jointly by the outputs of the constituent networks on that example." A sketch follows.
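A minimal illustration of that definition, combining member outputs by averaging (the combination rule and the toy members are assumptions, not from the deck):

```python
# Ensemble of networks trained on the same problem: the ensemble output
# for an input is decided jointly by the members (here, by their mean).
import numpy as np

def ensemble_predict(members, x):
    # Each member maps x to an output; the ensemble output is their mean.
    return np.mean([m(x) for m in members], axis=0)

# Three toy "networks" standing in for independently trained members.
members = [lambda x: np.tanh(1.1 * x), lambda x: np.tanh(0.9 * x),
           lambda x: np.tanh(1.0 * x)]
print(ensemble_predict(members, np.array([0.5, -0.5])))
```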

ANN Application Areas
- Classification
- Clustering
- Associative memory
- Control
- Function approximation

ANN Classifier Systems
- Learning capability
- Statistical classifier systems
- Data driven
- Generalisation capability
- Handle and filter large input data
- Reconstruct noisy and incomplete patterns
- Classification rules not transparent

Applications for ANN Classifiers
- Pattern recognition: industrial inspection, fault diagnosis, image recognition, target recognition, speech recognition
- Natural language processing: character recognition, handwriting recognition, automatic text-to-speech conversion

Clustering with ANNs
- Fast parallel distributed processing
- Handle large input information
- Robust to noise and incomplete patterns
- Data driven
- Plasticity/adaptation
- Visualisation of results
- Accuracy sometimes poor

ANN Clustering Applications
- Natural language processing: document clustering, document retrieval, automatic query
- Image segmentation
- Data mining: data set partitioning, detection of emerging clusters
- Fuzzy partitioning
- Condition-action association

Associative ANN Memories
- Stimulus-response association
- Auto-associative memory
- Content-addressable memory
- Fast parallel distributed processing
- Robust to noise and incomplete patterns
- Limited storage capability

Applications of ANN Associative Memories
- Character recognition
- Handwriting recognition
- Noise filtering
- Data compression
- Information retrieval

ANN Control Systems
- Learning/adaptation capability
- Data driven
- Non-linear mapping
- Fast response
- Fault tolerance
- Generalisation capability
- Handle and filter large input data
- Reconstruct noisy and incomplete patterns
- Control rules not transparent
- Learning may be problematic

ANN Control Schemes
- ANN controller
- Conventional controller + ANN for unknown or non-linear dynamics
- Indirect control schemes: the ANN models the direct plant dynamics, or the ANN models the inverse plant dynamics

ANN Control Applications
- Non-linear process control: chemical reaction control, industrial process control, water treatment, intensive care of patients
- Servo control: robot manipulators, autonomous vehicles, automotive control
- Dynamic system control: helicopter flight control, underwater robot control

ANN Function Modelling
- ANN as a universal function approximator
- Dynamic system modelling
- Learning capability
- Data driven
- Non-linear mapping
- Generalisation capability
- Handle and filter large input data
- Reconstruct noisy and incomplete inputs

ANN Modelling Applications
- Modelling of highly nonlinear industrial processes
- Financial market prediction
- Weather forecasts
- River flow prediction
- Fault/breakage prediction
- Monitoring of critically ill patients

Neural Network Approaches (slide by Martin Ho, Eddy Li, Eric Wong and Kitty Wong, 2000)
- ALVINN: Autonomous Land Vehicle In a Neural Network
