Jinan University Quancheng College
Graduation Project: Foreign Literature Translation
Title: Corner and Edge Detection
Major: Electrical Engineering and Automation    Class: Electrical Engineering Class 3
Student: Wang Jun    Student No.: 20063005072    Supervisor: Wei Jun
23 April 2010

A COMBINED CORNER AND EDGE DETECTOR
Chris Harris & Mike Stephens
Plessey Research Roke Manor, United Kingdom
The Plessey Company plc, 1988

Consistency of image edge filtering is of prime importance for 3D interpretation of image sequences using feature tracking algorithms. To cater for image regions containing texture and isolated features, a combined corner and edge detector based on the local auto-correlation function is utilised, and it is shown to perform with good consistency on natural imagery.

INTRODUCTION

The problem we are addressing in Alvey Project MMI149 is that of using computer vision to understand the unconstrained 3D world, in which the viewed scenes will in general contain too wide a diversity of objects for top-down recognition techniques to work. For example, we desire to obtain an understanding of natural scenes, containing roads, buildings, trees, bushes, etc., as typified by the two frames from a sequence illustrated in Figure 1. The solution to this problem that we are pursuing is to use a computer vision system based upon motion analysis of a monocular image sequence from a mobile camera. By extraction and tracking of image features, representations of the 3D analogues of these features can be constructed.

To enable explicit tracking of image features to be performed, the image features must be discrete, and not form a continuum like texture, or edge pixels (edgels). For this reason, our earlier work [1] has concentrated on the extraction and tracking of feature-points or corners, since they are discrete, reliable and meaningful [2]. However, the lack of connectivity of feature-points is a major limitation in our obtaining higher level descriptions, such as surfaces and objects. We need the richer information that is available from edges [3].

THE EDGE TRACKING PROBLEM

Matching between edge images on a pixel-by-pixel basis works for stereo, because of the known epi-polar camera geometry. However, for the motion problem, where the camera motion is unknown, the aperture problem prevents us from undertaking explicit edgel matching. This could be overcome by solving for the motion beforehand, but we are still faced with the task of tracking each individual edge pixel and estimating its 3D location from, for example, Kalman Filtering. This approach is unattractive in comparison with assembling the edgels into edge segments, and tracking these segments as the features.

Now, the unconstrained imagery we shall be considering will contain both curved edges and texture of various scales. Representing edges as a set of straight line fragments [4], and using these as our discrete features, will be inappropriate, since curved lines and texture edges can be expected to fragment differently on each image of the sequence, and so be untrackable. Because of ill-conditioning, the use of parametrised curves (eg. circular arcs) cannot be expected to provide the solution, especially with real imagery.

Figure 1. Pair of images from an outdoor sequence.

Having found fault with the above solutions to the problem of 3D edge interpretation, we question the necessity of trying to solve the problem at all! Psychovisual experiments (the ambiguity of interpretation in viewing a rotating bent coat-hanger in silhouette) show that the problem of 3D interpretation of curved edges may indeed be effectively insoluble. This problem seldom occurs in reality because of the existence of small imperfections and markings on the edge which act as trackable feature-points.

Although an accurate, explicit 3D representation of a curving edge may be unobtainable, the connectivity it provides may be sufficient for many purposes - indeed the edge connectivity may be of more importance than explicit 3D measurements. Tracked edge connectivity, supplemented by 3D locations of corners and junctions, can provide both a wire-frame structural representation, and delimited image regions which can act as putative 3D surfaces. This leaves us with the problem of performing reliable (ie. consistent) edge filtering. The state-of-the-art edge filters, such as [5], are not designed to cope with junctions and corners, and are reluctant to provide any edge connectivity. This is illustrated in Figure 2 for the Canny edge operator, where the above- and below-threshold edgels are represented respectively in black and grey. Note that in the bushes, some, but not all, of the edges are readily matchable by eye. After hysteresis has been undertaken, followed by the deletion of spurs and short edges, the application of a junction completion algorithm results in the edges and junctions shown in Figure 3, edges being shown in grey, and junctions in black. In the bushes, very few of the edges are now readily matched. The problem here is that of edges with responses close to the detection threshold: a small change in edge strength or in the pixellation causes a large change in the edge topology. The use of edges to describe the bush is suspect, and it is perhaps better to describe it in terms of feature-points alone.

Figure 2. Unlinked Canny edges for the outdoor images.
Figure 3. Linked Canny edges for the outdoor images.

The solution to this problem is to attempt to detect both edges and corners in the image: junctions would then consist of edges meeting at corners. To pursue this approach, we shall start from Moravec's corner detector [6].

MORAVEC REVISITED

Moravec's corner detector functions by considering a local window in the image, and determining the average changes of image intensity that result from shifting the window by a small amount in various directions. Three cases need to be considered:

A. If the windowed image patch is flat (ie. approximately constant in intensity), then all shifts will result in only a small change;
B. If the window straddles an edge, then a shift along the edge will result in a small change, but a shift perpendicular to the edge will result in a large change;
C. If the windowed patch is a corner or isolated point, then all shifts will result in a large change.

A corner can thus be detected by finding when the minimum change produced by any of the shifts is large.

We now give a mathematical specification of the above. Denoting the image intensities by I, the change E produced by a shift (x, y) is given by:

E_{x,y} = \sum_{u,v} w_{u,v} \, [ I_{x+u, y+v} - I_{u,v} ]^2

where w specifies the image window: it is unity within a specified rectangular region, and zero elsewhere. The shifts, (x, y), that are considered comprise (1,0), (1,1), (0,1) and (-1,1). Thus Moravec's corner detector is simply this: look for local maxima in min{E} above some threshold value.
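To make the Moravec procedure concrete, here is a minimal NumPy sketch that, for each of the four shifts, forms the windowed sum of squared differences E and keeps the per-pixel minimum, then marks local maxima above a threshold. The 3x3 window size, the wrap-around border handling of np.roll and the scipy.ndimage helpers are choices made for brevity in this sketch, not details taken from the paper.

```python
import numpy as np
from scipy.ndimage import maximum_filter, uniform_filter

def moravec_response(image, window=3,
                     shifts=((1, 0), (1, 1), (0, 1), (-1, 1))):
    """min over shifts of E_{x,y} = sum_w [I(x+u, y+v) - I(u, v)]^2."""
    img = image.astype(float)
    response = np.full(img.shape, np.inf)
    for dx, dy in shifts:
        # Squared difference between the image and its shifted copy
        # (np.roll wraps at the borders; a real implementation would crop).
        shifted = np.roll(img, shift=(-dy, -dx), axis=(0, 1))
        diff2 = (shifted - img) ** 2
        # Sum over the local window: a box filter of the squared differences.
        E = uniform_filter(diff2, size=window) * window * window
        response = np.minimum(response, E)
    return response

def moravec_corners(image, threshold):
    """Local maxima of the minimum-shift response above a threshold."""
    R = moravec_response(image)
    return (R > threshold) & (R == maximum_filter(R, size=3))
```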

Figure 4. Corner detection on a test image.

AUTO-CORRELATION DETECTOR

The performance of Moravec's corner detector on a test image is shown in Figure 4a; for comparison are shown the results of the Beaudet [7] and Kitchen & Rosenfeld [8] operators (Figures 4b and 4c respectively). The Moravec operator suffers from a number of problems; these are listed below, together with appropriate corrective measures:

1. The response is anisotropic because only a discrete set of shifts at every 45 degrees is considered - all possible small shifts can be covered by performing an analytic expansion about the shift origin:

E_{x,y} = \sum_{u,v} w_{u,v} \, [ I_{x+u, y+v} - I_{u,v} ]^2 = \sum_{u,v} w_{u,v} \, [ x X + y Y + O(x^2, y^2) ]^2

where the first gradients are approximated by

X = I \otimes (-1, 0, 1) \approx \partial I / \partial x
Y = I \otimes (-1, 0, 1)^T \approx \partial I / \partial y

Hence, for small shifts, E can be written

E(x, y) = A x^2 + 2 C x y + B y^2

where

A = X^2 \otimes w,   B = Y^2 \otimes w,   C = (X Y) \otimes w.

2. The response is noisy because the window is binary and rectangular - use a smooth circular window, for example a Gaussian:

w_{u,v} = \exp( -(u^2 + v^2) / 2\sigma^2 )

3. The operator responds too readily to edges because only the minimum of E is taken into account - reformulate the corner measure to make use of the variation of E with the direction of shift.

The change, E, for the small shift (x, y) can be concisely written as

E(x, y) = (x, y) \, M \, (x, y)^T

where the 2x2 symmetric matrix M is

M = \begin{pmatrix} A & C \\ C & B \end{pmatrix}
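As a concrete rendering of the quantities just defined, the sketch below computes the per-pixel A, B and C (and hence M) using the (-1, 0, 1) difference kernel for X and Y and a Gaussian window for w. The use of scipy.ndimage, the value of sigma and the function name auto_correlation_matrix are assumptions of this sketch rather than prescriptions from the paper.

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def auto_correlation_matrix(image, sigma=1.0):
    """Per-pixel A, B, C of M = [[A, C], [C, B]] as defined in the text."""
    img = image.astype(float)
    # First gradients: X approximates dI/dx via convolution with (-1, 0, 1),
    # Y approximates dI/dy via convolution with its transpose.
    kernel_x = np.array([[-1.0, 0.0, 1.0]])
    X = convolve(img, kernel_x)
    Y = convolve(img, kernel_x.T)
    # Gaussian window w applied to the gradient products:
    # A = X^2 (*) w,  B = Y^2 (*) w,  C = (XY) (*) w.
    A = gaussian_filter(X * X, sigma)
    B = gaussian_filter(Y * Y, sigma)
    C = gaussian_filter(X * Y, sigma)
    return A, B, C
```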

Note that E is closely related to the local auto-correlation function, with M describing its shape at the origin (explicitly, the quadratic terms in the Taylor expansion). Let α, β be the eigenvalues of M. α and β will be proportional to the principal curvatures of the local auto-correlation function, and form a rotationally invariant description of M. As before, there are three cases to be considered:

A. If both curvatures are small, so that the local auto-correlation function is flat, then the windowed image region is of approximately constant intensity (ie. arbitrary shifts of the image patch cause little change in E);
B. If one curvature is high and the other low, so that the local auto-correlation function is ridge shaped, then only shifts along the ridge (ie. along the edge) cause little change in E: this indicates an edge;
C. If both curvatures are high, so that the local auto-correlation function is sharply peaked, then shifts in any direction will increase E: this indicates a corner.

Figure 5. Auto-correlation principal curvature space: heavy lines give the corner/edge/flat classification, fine lines are equi-response contours.

Consider the graph of (α, β) space. An ideal edge will have α large and β zero (this will be a surface of translation), but in reality β will merely be small in comparison to α, due to noise, pixellation and intensity quantisation. A corner will be indicated by both α and β being large, and a flat image region by both α and β being small. Since an increase of image contrast by a factor of p will increase α and β proportionately by p^2, then if (α, β) is deemed to belong in an edge region, then so should (p^2 α, p^2 β), for positive values of p. Similar considerations apply to corners. Thus (α, β) space needs to be divided as shown by the heavy lines in Figure 5.
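For reference, the eigenvalues α and β discussed above can be written directly in terms of A, B and C using the standard result for a 2x2 symmetric matrix; this identity is not stated in the paper, but it makes the link between M and its principal curvatures explicit:

```latex
% Eigenvalues of M = [[A, C], [C, B]] (standard 2x2 symmetric-matrix result)
\alpha, \beta \;=\; \frac{A + B}{2} \;\pm\; \sqrt{\left(\frac{A - B}{2}\right)^{2} + C^{2}}
```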

CORNER/EDGE RESPONSE FUNCTION

Not only do we need corner and edge classification regions, but also a measure of corner and edge quality or response. The size of the response will be used to select isolated corner pixels and to thin the edge pixels.

Let us first consider the measure of corner response, R, which we require to be a function of α and β alone, on grounds of rotational invariance. It is attractive to use Tr(M) and Det(M) in the formulation, as this avoids the explicit eigenvalue decomposition of M, thus

Tr(M) = \alpha + \beta = A + B
Det(M) = \alpha \beta = A B - C^2

Consider the following inspired formulation for the corner response

R = Det(M) - k \, Tr(M)^2

Contours of constant R are shown by the fine lines in Figure 5. R is positive in the corner region, negative in the edge regions, and small in the flat region. Note that increasing the contrast (ie. moving radially away from the origin) in all cases increases the magnitude of the response. The flat region is specified by Tr falling below some selected threshold.
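The sketch below evaluates this response from the A, B, C maps of the earlier snippet and applies the three-way classification described above. The constant k (a value around 0.04 is a common choice) and the flat-region threshold are tuning parameters assumed here, not values given in the text.

```python
import numpy as np

def corner_edge_response(A, B, C, k=0.04):
    """R = Det(M) - k * Tr(M)^2, evaluated per pixel; also returns Tr(M)."""
    det = A * B - C * C      # Det(M) = alpha * beta
    tr = A + B               # Tr(M)  = alpha + beta
    return det - k * tr * tr, tr

def classify(R, tr, flat_threshold):
    """Corner / edge / flat labels as described in the text."""
    flat = tr < flat_threshold          # flat region: Tr below a threshold
    corner = (R > 0) & ~flat            # positive response: corner region
    edge = (R < 0) & ~flat              # negative response: edge region
    return corner, edge, flat
```

Note that no eigenvalue decomposition is performed: only the smoothed gradient products enter the computation, which is the point of formulating R through Tr and Det.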

Figure 6. Edge/corner classification for the outdoor images (grey = corner regions, white = thinned edges).
Figure 7. Completed edges for the outdoor images (white = corners, black = edges).

A corner region pixel (ie. one with a positive response) is selected as a nominated corner pixel if its response is an 8-way local maximum: corners so detected in the test image are shown in Figure 4d. Similarly, edge region pixels are deemed to be edgels if their responses are both negative and local minima in either the x or y directions, according to whether the magnitude of the first gradient in the x or y direction respectively is the larger. This results in thin edges. The raw edge/corner classification is shown in Figure 6, with black indicating corner regions, and grey, the thinned edges.

By applying low and high thresholds, edge hysteresis can be carried out, and this can enhance the continuity of edges. These classifications thus result in a 5-level image comprising: background, two corner classes and two edge classes. Further processing (similar to junction completion) will delete edge spurs and short isolated edges, and bridge short breaks in edges. This results in continuous thin edges that generally terminate in the corner regions. The edge terminators are then linked to the corner pixels residing within the corner regions, to form a connected edge-vertex graph, as shown in Figure 7. Note that many of the corners in the bush are unconnected to edges, as they reside in essentially textural regions.

Although not readily apparent from the Figure, many of the corners and edges are directly matchable. Further work remains to be undertaken concerning the junction completion algorithm, which is currently quite rudimentary, and in the area of adaptive thresholding.
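As one concrete reading of these selection rules, the sketch below nominates corner pixels as 8-way local maxima of positive response, and thins edge pixels by requiring a response minimum along x or y, chosen according to the larger first-gradient magnitude. The gradient maps X and Y are those of the earlier snippet; the zero corner threshold and the skipped border pixels are simplifications of this sketch rather than details fixed by the paper.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def select_corners(R, corner_threshold=0.0):
    """Nominated corners: positive response that is an 8-way local maximum."""
    return (R > corner_threshold) & (R == maximum_filter(R, size=3))

def thin_edges(R, X, Y):
    """Edgels: negative response, locally minimal along x or y according to
    which first-gradient magnitude is larger (border pixels skipped)."""
    edges = np.zeros(R.shape, dtype=bool)
    rows, cols = R.shape
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            if R[i, j] >= 0:
                continue
            if abs(X[i, j]) >= abs(Y[i, j]):
                # Gradient mainly along x: thin across x (neighbouring columns).
                edges[i, j] = R[i, j] <= R[i, j - 1] and R[i, j] <= R[i, j + 1]
            else:
                # Gradient mainly along y: thin across y (neighbouring rows).
                edges[i, j] = R[i, j] <= R[i - 1, j] and R[i, j] <= R[i + 1, j]
    return edges
```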

ACKNOWLEDGMENTS

The authors gratefully acknowledge the use of imagery supplied by Mr J Sherlock of RSRE, and of the results (the comparison of corner operators, Figure 4) obtained under MOD contract. The grey-level images used in this paper are subject to the following copyright: Copyright Controller HMSO London 1988.

REFERENCES

1. Harris, C G & J M Pike, "3D Positional Integration from Image Sequences", Proceedings of the third Alvey Vision Conference (AVC87), pp. 233-236, 1987; reproduced in Image and Vision Computing, vol. 6, no. 2, pp. 87-90, May 1988.
2. Charnley, D & R J Blissett, "Surface Reconstruction from Outdoor Image Sequences", Proceedings of the fourth Alvey Vision Club (AVC88), 1988.
3. Stephens, M J & C G Harris, "3D Wire-Frame Integration from Image Sequences", Proceedings of the fourth Alvey Vision Club (AVC88), 1988.
4. Ayache, N & F Lustman, "Fast and Reliable Passive Trinocular Stereovision", Proceedings of the first ICCV, 1987.
5. Canny, J F, "Finding Edges and Lines in Images", MIT Technical Report AI-TR-720, 1983.
6. Moravec, H, "Obstacle Avoidance and Navigation in the Real World by a Seeing Robot Rover", Technical Report CMU-RI-TR-3, Carnegie-Mellon University, Robotics Institute, September 1980.
7. Beaudet, P R, "Rotationally Invariant Image Operators", International Joint Conference on Pattern Recognition, pp. 579-583, 1978.
8. Kitchen, L & A Rosenfeld, "Grey-level Corner Detection", Pattern Recognition Letters, vol. 1, pp. 95-102, 1982.

A Combined Corner and Edge Detector
Chris Harris & Mike Stephens
Plessey Research Roke Manor, United Kingdom; The Plessey Company plc, 1988

Abstract: Consistency of image edge filtering is of prime importance for the 3D interpretation of image sequences using feature-tracking algorithms. To cater for image regions containing texture and isolated features, a combined corner and edge detector based on the local auto-correlation function is used, and it is shown to perform with good consistency on natural imagery.

Keywords: image features, edges, corners, pixels, detector

Introduction

The problem we are addressing in Alvey Project MMI149 is that of using computer vision to understand the unconstrained 3D world, in which the viewed scenes will in general contain too wide a diversity of objects for top-down recognition techniques to work. For example, we wish to obtain an understanding of natural scenes containing roads, buildings, trees, bushes and so on, as typified by the two frames of the sequence shown in Figure 1. The solution we are pursuing is to use a computer vision system based on motion analysis of a monocular image sequence from a moving camera. By extracting and tracking image features, representations of the 3D analogues of these features can be constructed.

For explicit tracking of image features to be possible, the features must be discrete, and not form a continuum like texture or edge pixels (edgels). For this reason, our earlier work [1] concentrated on the extraction and tracking of feature points or corners, since they are discrete, reliable and meaningful [2]. However, the lack of connectivity of feature points is a major limitation in obtaining higher-level descriptions such as surfaces and objects. We need the richer information that is available from edges [3].

The Edge Tracking Problem

Matching between edge images on a pixel-by-pixel basis works for stereo because of the known epipolar camera geometry. For the motion problem, however, where the camera motion is unknown, the aperture problem prevents us from undertaking explicit edgel matching. This could be overcome by solving for the motion beforehand, but we would still face the task of tracking each individual edge pixel and estimating its 3D location by, for example, Kalman filtering. This approach compares unfavourably with assembling the edgels into edge segments and tracking those segments as the features. Now, the unconstrained imagery we shall be considering will contain both curved edges and texture of various scales. Representing edges as a set of straight-line fragments [4] and using these as our discrete features would be inappropriate, since curved lines and texture edges can be expected to fragment differently on each image of the sequence, and so be untrackable.
