Foreign Literature Translation: Theoretical Study of a CCD Binocular Stereo Vision Measurement System


Graduation Project: Translation of Foreign Literature
Title: CCD stereo vision measurement system theory
School: School of Electrical Engineering
Major and class: Automation
Student name:            Student number:
Supervisor:              Supervisor's title:
Duration: 2011-02-26 to 2011-03-14        Location:
Attachments: 1. Translation of the foreign literature; 2. Original text.
Supervisor's comments:            Signature:            Date:

Attachment 1: Translation of the Foreign Literature

Theoretical Study of a CCD Binocular Stereo Vision Measurement System

Abstract: A mathematical model of a CCD binocular stereo vision measurement system is established from the principles of geometric imaging. With the aim of improving measurement accuracy, the relationships between the system structure parameters, the image recognition error, and the measurement accuracy of the system are analyzed and discussed in depth at the theoretical level, and the conclusions are verified by experiment. The results provide strong guidance for building such a measurement system in practice.

Keywords: stereo vision; CCD; measurement accuracy; image recognition; system measurement

Introduction

Binocular stereo vision measurement is an important branch of computer vision and has long been one of the focal points of computer vision research. Because it approximates the human visual system, offers relatively high measurement accuracy and speed, and has a simple structure that is easy to use, it is widely applied in industrial inspection, object recognition, workpiece positioning, robot self-guidance, and many other fields. In recent years many researchers have carried out extensive work in this area [1-4]. Most of this work has concentrated on the mathematical model of the vision measurement system, on system calibration methods [5-7], and on matching algorithms for target feature points [8-9], whereas the structure parameters of the system (the distance between the two CCDs, the angle between the optical axes, and so on) have rarely been studied. Reference [10] carried out a theoretical study of the structure parameters of stereo vision, but it started from the sense of depth perceived when viewing an object and examined the relationship among three parameters, namely the distance between the CCDs and the object, the spacing between the two CCDs, and the viewing distance; it did not address the influence of the structure parameters on measurement accuracy. Practice has shown, however, that the choice of structure parameters is crucial to the measurement accuracy of the system in real applications. In addition, the stereo vision measurement principle shows that image recognition error is another important factor that directly affects measurement accuracy. With these considerations in mind, this paper analyzes in depth, from a theoretical standpoint, how the system structure parameters and the image recognition error affect system measurement accuracy. Taking into account the influence of the structure parameters on camera calibration accuracy as well, a design scheme for building a binocular stereo vision measurement system in practical applications is given.

1 Measurement Principle and Mathematical Model of Binocular Stereo Vision

1.1 Camera imaging model

The camera imaging model is a mathematical description of the geometric relationships of the optical imaging system. The imaging models currently used in camera calibration mainly include the pinhole model, the two-plane model, and artificial neural network models. Among them, the pinhole model is the one most widely adopted at present; it describes an ideal linear mapping, as shown in Figure 1. Here Oc is the optical center of the camera, OcXcYcZc is the camera coordinate system, OXYZ is the world coordinate system, and oxy is the physical coordinate system of the camera image plane. P(X, Y, Z) is an object point in space, and p(x, y) is its projection on the image plane. According to this imaging model, the relationship between the object point P(X, Y, Z) and its image point can be expressed by Equation (1).
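
As an illustration of the pinhole mapping referred to above, the following sketch (Python) projects a point expressed in the camera frame onto the image plane and then into pixel coordinates. Since Equation (1) is not reproduced here, the sketch assumes the standard pinhole form x = f·Xc/Zc, y = f·Yc/Zc; the focal length and pixel parameters are hypothetical values, not taken from the paper.

import numpy as np

# Hypothetical camera parameters (not taken from the paper).
f = 8.0                # focal length in mm
dx, dy = 0.01, 0.01    # physical pixel size in mm (x and y directions)
u0, v0 = 320.0, 240.0  # pixel coordinates of the image-plane origin o

def project_pinhole(P_c):
    """Project a point P_c = (Xc, Yc, Zc) given in the camera frame.

    Returns (x, y): physical image-plane coordinates (mm), and
            (u, v): the corresponding pixel coordinates.
    """
    Xc, Yc, Zc = P_c
    x = f * Xc / Zc      # ideal linear (pinhole) mapping
    y = f * Yc / Zc
    u = x / dx + u0      # physical coordinates -> pixel coordinates
    v = y / dy + v0
    return (x, y), (u, v)

if __name__ == "__main__":
    (x, y), (u, v) = project_pinhole((30.0, -15.0, 1000.0))
    print(f"image plane: x={x:.4f} mm, y={y:.4f} mm")
    print(f"pixels:      u={u:.2f}, v={v:.2f}")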

When two or more cameras view the same object point, an overdetermined system of 2i equations (i ≥ 2) is obtained, so the system can be solved by the least-squares method to determine the spatial coordinates of the object point.

1.2 Mathematical model of the binocular stereo vision measurement system

A binocular stereo vision system usually consists of two CCD cameras with identical structure and performance, placed symmetrically. Based on the camera imaging model above, an overdetermined system of equations can be derived from Equation (2); in it, r, t and r′, t′ are the rotation and translation matrices of the two camera coordinate systems relative to the world coordinate system, which can be obtained by camera calibration, and (x, y) and (x′, y′) are the physical coordinates of the projections of the object point (X, Y, Z) on the two CCD image planes.

Figure 2  Simplified model of binocular stereo vision
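
The least-squares solution of such an overdetermined system can be sketched as follows (Python). The rotation matrices, translation vectors, and focal length below are hypothetical placeholders standing in for the calibrated quantities r, t, r′, t′; the sketch simply rearranges each camera's pinhole equations into two linear equations in (X, Y, Z) and solves the stacked system with numpy's least-squares routine.

import numpy as np

def rot_y(theta):
    """Rotation matrix about the Y axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

f = 8.0  # hypothetical focal length (mm), shared by both cameras

# Hypothetical extrinsics: world frame = left camera frame; the right
# camera centre sits 200 mm along world X and is turned towards the scene.
R1, t1 = np.eye(3), np.zeros(3)
R2 = rot_y(np.deg2rad(20.0))
C2 = np.array([200.0, 0.0, 0.0])
t2 = -R2 @ C2                      # camera coords: Pc = R @ P + t = R @ (P - C)

def project(P, R, t):
    """Pinhole projection of a world point P into one camera (physical coords, mm)."""
    Xc, Yc, Zc = R @ P + t
    return f * Xc / Zc, f * Yc / Zc

def triangulate(obs):
    """Least-squares triangulation.

    obs is a list of (x, y, R, t) tuples, one per camera.  Each camera
    contributes two linear equations in the unknown world point P:
        (x * r3 - f * r1) . P = f * tx - x * tz
        (y * r3 - f * r2) . P = f * ty - y * tz
    """
    A, b = [], []
    for x, y, R, t in obs:
        r1, r2, r3 = R
        tx, ty, tz = t
        A.append(x * r3 - f * r1); b.append(f * tx - x * tz)
        A.append(y * r3 - f * r2); b.append(f * ty - y * tz)
    P, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return P

if __name__ == "__main__":
    P_true = np.array([120.0, -40.0, 1500.0])
    obs = [(*project(P_true, R1, t1), R1, t1),
           (*project(P_true, R2, t2), R2, t2)]
    print("recovered point:", triangulate(obs))  # should be close to P_true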

To analyze the structure parameters of the system, the stereo vision system shown in Figure 2 is constructed here. In this configuration the three coordinate systems lie in the same plane, with the Y axis pointing into the page; the origin of the world coordinate system OXYZ coincides with that of the left camera coordinate system OcXcYcZc, the distance between the two cameras is L, and the angle between the two optical axes is 2θ. With this structure, the overdetermined system of equations formed by the two cameras can be written as Equation (4), from which the spatial coordinates of the object point are obtained.
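
The role of the structure parameters L and 2θ can be made concrete with a short helper (Python) that builds the rotation and translation of each camera of the simplified rig from those two numbers alone. The layout used is one plausible reading of Figure 2, which is not reproduced here: the left camera origin coincides with the world origin, the right camera sits L along the world X axis, and each optical axis is turned by θ toward the other so that the axes enclose the full angle 2θ. These placement and sign conventions are assumptions for illustration, not statements of the paper.

import numpy as np

def rot_y(theta):
    """Rotation about the Y axis (the axis pointing into the page in Fig. 2)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def simple_rig(L, two_theta):
    """Extrinsics (R, t) of the left and right cameras of the simplified rig.

    Assumed layout: left camera origin = world origin, right camera displaced
    by L along world X, each optical axis rotated by theta = two_theta / 2
    towards the midline so the two axes enclose the angle two_theta.
    A world point P maps into camera coordinates as Pc = R @ (P - C),
    i.e. Pc = R @ P + t with t = -R @ C, where C is the camera centre.
    """
    theta = two_theta / 2.0
    R_left,  C_left  = rot_y(-theta), np.array([0.0, 0.0, 0.0])
    R_right, C_right = rot_y(+theta), np.array([L, 0.0, 0.0])
    return (R_left, -R_left @ C_left), (R_right, -R_right @ C_right)

if __name__ == "__main__":
    (Rl, tl), (Rr, tr) = simple_rig(L=300.0, two_theta=np.deg2rad(40.0))
    P = np.array([150.0, 0.0, 1200.0])           # a point roughly on the midline
    print("left-camera coords :", Rl @ P + tl)   # symmetric Z in both cameras
    print("right-camera coords:", Rr @ P + tr)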

2 Effect of Image Recognition Error on System Measurement Accuracy

In a stereo vision system, the position of a spatial object point on the two camera image planes is expressed in pixel coordinates, and the pixels of an area-array CCD camera have a finite physical size. As a result, the true physical coordinates of the object point in the image cannot be expressed exactly, which is the fundamental cause of image recognition error, as illustrated in Figure 3. The pixel coordinate system and the physical coordinate system on the image plane are related by x = (u − u0)dx, y = (v − v0)dy, where (u, v) are the image coordinates in pixels, dx and dy are the physical sizes of a pixel along the x and y axes, and (u0, v0) are the coordinates of the origin o of the oxy system in the o0uv coordinate system [3]. As shown in Figure 4, assume that the image recognition accuracy reaches the 0.5-pixel level.

For a spatial object point projected into the pixel in row i and column j of the image plane, its recorded physical coordinates are

x = (i − u0)dx,  y = (j − v0)dy

That is, as long as the point falls anywhere inside that pixel, its coordinate value is a fixed constant, whereas the ideal coordinates may lie anywhere within the ranges

(i − u0 − 0.5)dx ≤ x̂ ≤ (i − u0 + 0.5)dx
(j − v0 − 0.5)dy ≤ ŷ ≤ (j − v0 + 0.5)dy

Combining the expressions above, the image recognition error of the point is

ex = 0.5 − (x − ⌊x⌋),  ey = 0.5 − (y − ⌊y⌋)

where ⌊x⌋ and ⌊y⌋ denote x and y rounded to integers. Because of the image recognition error, the actual image-point coordinates are related to the ideal ones by x = x̂ + ex, y = ŷ + ey, where (x, y) are the actual physical coordinates of the image point and (x̂, ŷ) are the ideal ones.
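
A short numerical sketch (Python) of the recognition error described above: an ideal physical image coordinate is snapped to the centre of the pixel it falls in, and the difference between the recorded and the ideal value is the error, so that actual = ideal + error as in the text. The pixel size and principal-point values are hypothetical.

import numpy as np

dx, dy = 0.01, 0.01      # hypothetical pixel size (mm)
u0, v0 = 320.0, 240.0    # hypothetical image-plane origin in pixel units

def quantize(x_ideal, y_ideal):
    """Return the recorded (actual) physical coordinates and the recognition error.

    The ideal image point is assigned to the pixel (row i, column j) it falls
    in; the recorded coordinate is then x = (i - u0) dx, y = (j - v0) dy, and
    the error is recorded minus ideal (so actual = ideal + error).
    """
    i = np.round(x_ideal / dx + u0)   # pixel indices the point falls into
    j = np.round(y_ideal / dy + v0)
    x_actual = (i - u0) * dx
    y_actual = (j - v0) * dy
    ex = x_actual - x_ideal           # bounded by +/- 0.5 * dx
    ey = y_actual - y_ideal           # bounded by +/- 0.5 * dy
    return (x_actual, y_actual), (ex, ey)

if __name__ == "__main__":
    (xa, ya), (ex, ey) = quantize(0.2437, -0.1081)
    print(f"recorded: ({xa:.4f}, {ya:.4f}) mm, error: ({ex:+.5f}, {ey:+.5f}) mm")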

Let the coordinates of the measured object point be P = (X, Y, Z). The ideal image coordinates (x̂, ŷ) and (x̂′, ŷ′) of its projections in the two cameras are computed from Equation (1). Taking the image recognition error into account, the actual image coordinates (x, y) and (x′, y′) follow from Equation (8); substituting them into Equation (4) gives the computed spatial coordinates (X̂, Ŷ, Ẑ) of the measured point, so the measurement error of the object point can be expressed as

eX = X̂ − X,  eY = Ŷ − Y,  eZ = Ẑ − Z    (9)

3 Analysis of System Structure Parameters and Experimental Results

3.1 Relationship between structure parameters and system measurement accuracy

Based on the system model and the mathematical derivation above, the relationship between the system structure parameters and the system measurement error was obtained. Figure 5 shows the measurement error of a particular object point as a function of the angle 2θ between the two camera axes. It can be seen that eX fluctuates only slightly, eY rises slowly as 2θ increases, and eZ changes much more sharply, increasing substantially with 2θ.

Figure 5  Measurement error of a particular spatial point versus the angle between the two optical axes
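
The analysis behind Figure 5 can be reproduced in outline with the sketch below (Python): for a fixed test point, the ideal projections in the simplified rig are quantized to the pixel grid and the point is re-triangulated, and the resulting errors eX, eY, eZ are printed for several values of 2θ. All numerical values (focal length, pixel size, baseline, test point) are hypothetical, and the rig layout follows the same assumed conventions as the earlier sketches, so the numbers only illustrate the trend, not the paper's actual curves.

import numpy as np

# Hypothetical intrinsics shared by both cameras.
f = 8.0                  # focal length (mm)
dx = dy = 0.01           # pixel size (mm)
u0, v0 = 320.0, 240.0    # image-plane origin (pixels)

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rig(L, two_theta):
    """Left/right camera (R, t) for the simplified symmetric rig (assumed layout)."""
    th = two_theta / 2.0
    Rl, Cl = rot_y(-th), np.array([0.0, 0.0, 0.0])
    Rr, Cr = rot_y(+th), np.array([L, 0.0, 0.0])
    return (Rl, -Rl @ Cl), (Rr, -Rr @ Cr)

def project(P, R, t):
    Xc, Yc, Zc = R @ P + t
    return f * Xc / Zc, f * Yc / Zc

def quantize(x, y):
    """Snap ideal physical image coordinates to the centre of the containing pixel
    (the pixel grid is treated as unbounded for simplicity)."""
    i = np.round(x / dx + u0)
    j = np.round(y / dy + v0)
    return (i - u0) * dx, (j - v0) * dy

def triangulate(obs):
    """Least-squares solution of the stacked pinhole equations (two per camera)."""
    A, b = [], []
    for x, y, (R, t) in obs:
        r1, r2, r3 = R
        tx, ty, tz = t
        A.append(x * r3 - f * r1); b.append(f * tx - x * tz)
        A.append(y * r3 - f * r2); b.append(f * ty - y * tz)
    P, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return P

if __name__ == "__main__":
    P_true = np.array([150.0, 80.0, 1200.0])   # hypothetical test point (mm)
    L = 300.0                                  # hypothetical camera spacing (mm)
    print(" 2theta       eX        eY        eZ   (mm)")
    for deg in (20, 40, 60, 80, 100, 120):
        cams = rig(L, np.deg2rad(deg))
        obs = [(*quantize(*project(P_true, R, t)), (R, t)) for R, t in cams]
        eX, eY, eZ = triangulate(obs) - P_true
        print(f"{deg:6d}  {eX:+8.4f}  {eY:+8.4f}  {eZ:+8.4f}")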

Figure 6 shows the average measurement error of object points within a certain range around the origin of the world coordinate system as a function of the angle 2θ between the two cameras and the distance L between them. It can be seen that when the distance between the cameras is fixed, the average error grows as 2θ increases, and when the angle between the cameras is fixed, the average error grows steadily as their distance increases.

Figure 6  Average error versus the angle between the two optical axes and the distance between the two cameras

Overall, the relationship between the structure parameters and the measurement accuracy of the system is fairly complex. It can be summarized as follows:
1) The measurement error is proportional to the spacing between the cameras: the smaller the spacing, the smaller the error;
2) When the angle between the cameras is no greater than 130°, the measurement error is small; otherwise it is larger. These conclusions are highly reliable.

3.2 Effect of structure parameters on camera calibration accuracy

In practical applications, a relationship between the two CCD coordinate systems must be established, which requires stereo calibration of the two CCDs to obtain the rotation matrix and translation matrix between them. Experience shows that camera calibration accuracy is likewise closely related to the choice of system structure parameters. The calibration method used here is a single-plane template calibration strategy, and its accuracy is evaluated by a method based on the side length of the checkerboard squares; this method can also serve as a way of measuring the system measurement error. Repeated experiments were carried out on ten 50 mm checkerboard squares of the calibration template; the results are given in Table 1.

Table 1  Effect of structure parameters on the average calibration error

As Table 1 shows, when the distance between the cameras is fixed, the calibration error increases steadily as the angle between the optical axes increases; when the axis angle is fixed, the calibration error increases steadily with the distance between the cameras. Extensive experiments show that when the distance between the two cameras does not exceed 500 mm and the angle between the two optical axes does not exceed 60°, the calibration error is small.
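
The length-based accuracy check described above can be sketched as follows (Python): reconstruct the two end points of each checkerboard segment with the stereo system, compare the reconstructed length with the nominal 50 mm, and average the absolute deviations. The reconstructed corner coordinates below are invented placeholders; in a real evaluation they would come from triangulation of matched corner points.

import numpy as np

NOMINAL = 50.0  # nominal side length of a checkerboard square (mm)

def length_error(corner_pairs, nominal=NOMINAL):
    """Average absolute deviation of reconstructed segment lengths from nominal.

    corner_pairs: iterable of (P_start, P_end) 3-D points reconstructed by the
    stereo system for the two ends of each nominal 50 mm segment.
    """
    errs = [abs(np.linalg.norm(np.asarray(b) - np.asarray(a)) - nominal)
            for a, b in corner_pairs]
    return float(np.mean(errs)), errs

if __name__ == "__main__":
    # Invented reconstructed corners for three of the ten 50 mm segments,
    # standing in for real triangulation output.
    pairs = [((0.0, 0.0, 1000.0), (49.93, 0.0, 1000.2)),
             ((50.0, 0.0, 1000.0), (100.1, 0.1, 999.8)),
             ((0.0, 50.0, 1001.0), (49.8, 50.0, 1000.6))]
    avg, errs = length_error(pairs)
    print("per-segment errors (mm):", [round(e, 3) for e in errs])
    print("average calibration error (mm):", round(avg, 3))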

4 Conclusion

Considering the influence of the system structure parameters on both measurement accuracy and calibration accuracy, the angle between the two camera optical axes and the distance between the two cameras should be made as small as possible when building a stereo vision system. In practice, however, to make visual matching of target feature points easier (especially when tracking moving targets in real time over a large range, where the target may be occluded), the angle between the two optical axes should be chosen between 30° and 60°. In addition, the image recognition accuracy for target feature points should reach sub-pixel level whenever possible, so that a tiny error at the image does not grow into a large error in the measurement. A stereo vision system was built according to the above derivation and produced good experimental results, showing that the conclusions of this paper offer significant practical guidance and laying a solid foundation for further in-depth research.

Attachment 2: Original Text (photocopy)

CCD stereo vision measurement system theory

Abstract: A mathematical model of a CCD binocular stereo vision measurement system is built using the principles of geometric imaging. Starting from the goal of improving system measurement accuracy, the relationships between the system structure parameters, the image recognition error, and the measurement accuracy of the system are analyzed and explored in depth at the theoretical level, and the conclusions are verified by experiment. The research provides strong guidance for the actual construction of such a measurement system.

Key words: stereo vision; CCD; measurement accuracy; image recognition; system measurements

Introduction

Binocular stereo vision measurement is an important branch of computer vision and has long been a focus and hot spot of computer vision research. Because it approximates the human visual system, achieves high measurement accuracy and speed, and has a simple, easy-to-use structure, it is widely used in industrial inspection, object recognition, workpiece positioning, robot homing, and many other fields. In recent years many scholars have done a great deal of research in this area [1-4]. Much of this work has focused on the mathematical model of the vision measurement system, on system calibration methods [5-7], and on matching algorithms for target feature points [8-9], while the structure parameters of the system (the distance between the two CCDs, the angle between the optical axes, and so on) have rarely been studied. Reference [10] presented a theoretical study of the structure parameters of stereo vision, but it started from the sense of depth experienced when viewing an object and investigated the relationship among three parameters, namely the distance between the CCDs and the object, the spacing between the two CCDs, and the viewing distance; it did not address the effect of the structure parameters on system measurement accuracy. Practice has shown that the setting of the system structure parameters is essential to the measurement accuracy of the system in real applications. In addition, the stereo vision measurement principle shows that image recognition error is another important factor that directly affects system accuracy. Based on these considerations, the effects of the system structure parameters and the image recognition error on system measurement accuracy are analyzed and studied in depth from a theoretical standpoint. Combined with the effect of the structure parameters on camera calibration accuracy, a design scheme for a practical binocular stereo vision measurement system is given.

1 Binocular stereo vision measurement principle and mathematical model

1.1 Camera imaging model

The camera imaging model is the mathematical representation of the geometric relationships of the optical imaging system. The imaging models currently applied in camera calibration mainly include the pinhole imaging model, the two-plane model, and artificial neural network models. Among these, the pinhole imaging model is the one most widely used at present; it reflects an ideal linear mapping, as shown in Figure 1, where Oc is the optical center of the camera, OcXcYcZc is the camera coordinate system, OXYZ is the world coordinate system, and oxy is the physical coordinate system of the camera image plane. P(X, Y, Z) is an object point in space and p(x, y) is its projection point on the image plane. According to this imaging model, the relationship between the object point P(X, Y, Z) and its image point can be expressed by Equation (1). When two or more cameras view the same point, an overdetermined system of 2i equations (i ≥ 2) is obtained, so the least-squares method can be used to solve the system and determine the spatial coordinates of the object point.

1.2 Mathematical model of the stereo vision measurement system

A binocular stereo vision system usually consists of two CCD cameras of identical structure and performance, placed symmetrically. Based on the camera imaging model above, an overdetermined system of equations can be derived from Equation (2); in it, r, t and r′, t′ are the rotation and translation matrices of the two camera coordinate systems relative to the world coordinate system, obtainable through camera calibration, and (x, y), (x′, y′) are the physical coordinates of the projections of the object point (X, Y, Z) on the two CCD image planes.

Figure 2  Simple model of binocular stereo vision

To analyze the structural parameters of the system, the stereo vision system shown in Figure 2 is constructed. In this structure the coordinate systems lie in the same plane, with the Y axis perpendicular to the paper; the world coordinate system OXYZ has its origin coincident with that of the left camera coordinate system OcXcYcZc, the spacing between the two cameras is L, and the angle between the two optical axes is 2θ. With this structure, the overdetermined system of equations formed by the two cameras can be written as Equation (4), from which the spatial coordinates of the object point are obtained.

2 Effect of image recognition error on system accuracy

In a stereo vision system, the position of a spatial object point on the two camera image planes is represented by pixel coordinates, and the pixels of an area-array CCD camera have a finite physical size. As a result, the true physical coordinates of the object point in the image cannot be expressed exactly, which is the fundamental cause of image recognition error, as shown in Figure 3. The pixel coordinate system and the physical coordinate system on the image plane are related by x = (u − u0)dx, y = (v − v0)dy, where (u, v) are the image coordinates in pixels, dx and dy are the physical sizes of each pixel along the x and y axes, and (u0, v0) are the coordinates of the origin o of the oxy coordinate system in the o0uv coordinate system [3]. As shown in Figure 4, assume that the image recognition accuracy reaches the 0.5-pixel level. For a spatial object point projected into the pixel in row i and column j of the image plane, its physical coordinates are

x = (i − u0)dx,  y = (j − v0)dy

That is, as long as the point falls within that pixel, its coordinate value is a constant, while ideally the coordinates should lie within the ranges

(i − u0 − 0.5)dx ≤ x̂ ≤ (i − u0 + 0.5)dx
(j − v0 − 0.5)dy ≤ ŷ ≤ (j − v0 + 0.5)dy

Combining the expressions above, the image recognition error of the point is

ex = 0.5 − (x − ⌊x⌋),  ey = 0.5 − (y − ⌊y⌋)

where ⌊x⌋ and ⌊y⌋ denote x and y rounded to integers. Because of the image recognition error, the actual and ideal image-point coordinates are related by x = x̂ + ex, y = ŷ + ey, where (x, y) are the actual physical coordinates of the image point and (x̂, ŷ) are the ideal ones. Let the measured object point be P = (X, Y, Z); the ideal image coordinates (x̂, ŷ) and (x̂′, ŷ′) of its projections in the two cameras are calculated from Equation (1), the actual image coordinates (x, y) and (x′, y′) are obtained from Equation (8) by taking the image recognition error into account, and substituting these into Equation (4) gives the computed spatial coordinates (X̂, Ŷ, Ẑ) of the measured point. The measurement error of the measured object point can then be expressed as

eX = X̂ − X,  eY = Ŷ − Y,  eZ = Ẑ − Z    (9)

3 Analysis of system parameters and experimental results

3.1 Relationship between structural parameters and system accuracy

Based on the system model and the mathematical derivation above, the relationship between the system structure parameters and the system measurement error was obtained. Figure 5 shows the measurement error of a specific object point as a function of the angle 2θ between the two cameras. It can be seen from the figure that eX fluctuates little, eY shows a slow rising trend as 2θ increases, and eZ changes much more sharply, increasing significantly with 2θ.

Figure 5  Measurement error of a specific spatial point versus the angle between the two axes

Figure 6 shows the average measurement error of object points within a certain range around the origin of the world coordinate system as a function of the angle 2θ between the two cameras and the distance L between them. It can be seen that, for a fixed distance between the two cameras, the average error grows as 2θ increases, and for a fixed angle between the two cameras, the average error grows as the distance increases.

Figure 6  Average error versus the angle between the two axes and the distance between the two cameras

Overall, the relationship between the structural parameters and the accuracy of the system is fairly complex and can be summarized as follows:
1) The measurement error is directly proportional to the spacing between the cameras: the smaller the spacing, the smaller the error;
2) When the angle between the cameras is not greater than 130°, the measurement error is small; otherwise it is larger. These conclusions are highly reliable.

3.2 Effect of structural parameters on camera calibration accuracy

In practice, a link between the two CCD coordinate systems must be established, which requires stereo calibration of the two CCDs in order to obtain the rotation matrix and translation matrix between them. It has been shown that the accuracy of camera calibration is also closely related to the setting of the system structure parameters. The calibration method used is a single-plane template calibration strategy, and the accuracy is assessed by a method based on the side length of the checkerboard squares; this method can also be used to measure the system measurement error. Repeated experiments were performed on ten 50 mm checkerboard squares of the calibration template, and the experimental results are shown in Table 1.

Table 1  Effect of structure parameters on the average calibration error

As can be seen from Table 1, when the camera distance is fixed, the calibration error increases continuously as the optical-axis angle increases; when the axis angle is fixed, the calibration error increases with the camera distance. Extensive experiments demonstrate that when the distance between the two cameras is not more than 500 mm and the angle between the two optical axes is not more than 60°, the calibration error is small.

4 Conclusion

Considering the effect of the system structure parameters on both measurement accuracy and calibration accuracy, the angle between the two camera optical axes and the distance between the two cameras should be as small as possible when a stereo vision system is set up. In practice, however, taking into account the ease of visual matching of target feature points, especially for large-range real-time tracking of moving targets where the target may be occluded, the angle between the two camera optical axes should be chosen between 30° and 60°. In addition, the image recognition accuracy of target feature points should reach sub-pixel level as far as possible, so as to avoid a small error at the image turning into a large error in the measurement. A stereo vision system was set up according to the above derivation and achieved good experimental results, showing that the conclusions of this paper have considerable practical significance and laying a solid foundation for further in-depth research.
