Digital Image Processing and Edge Detection (Chinese-English Translation)


Graduation Project, Tianjin University of Technology and Education

Digital Image Processing and Edge Detection

1. Digital Image Processing

Interest in digital image processing methods stems from two principal application areas: improvement of pictorial information for human interpretation; and processing of image data for storage, transmission, and representation for autonomous machine perception. An image may be defined as a two-dimensional function, f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image. The field of digital image processing refers to processing digital images by means of a digital computer. Note that a digital image is composed of a finite number of elements, each of
which has a particular location and value. These elements are referred to as picture elements, image elements, pels, and pixels. Pixel is the term most widely used to denote the elements of a digital image. Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the electromagnetic (EM) spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma to radio waves. They can operate on images generated by sources that humans are not accustomed to associating with images. These include ultrasound, electron microscopy, and computer-generated images. Thus, digital image processing encompasses a wide and varied field of applications.

There is no general agreement among authors regarding where image processing stops and other related areas, such as image analysis and computer vision, start. Sometimes a distinction is made by defining image processing as a discipline in which both the input and output of a process are images. We believe this to be a limiting and somewhat artificial boundary. For example, under this definition, even the trivial task of computing the average intensity of an image (which yields a single number) would not be considered an image processing operation. On the other hand, there are fields such as computer vision whose ultimate goal is to use computers to emulate human vision, including learning and being able to make inferences and take actions based on visual inputs. This area itself is a branch of artificial intelligence (AI), whose objective is to emulate human intelligence. The field of AI is still in its infancy in terms of development, with progress having been much slower than originally anticipated. The area of image analysis (also called image understanding) lies between image processing and computer vision.

There are no clear-cut boundaries in the continuum from image processing at one end to computer vision at the other. However, one useful paradigm is to consider three types of computerized processes in this continuum: low-, mid-, and high-level processes. Low-level processes involve primitive operations such as image preprocessing to reduce noise, contrast enhancement, and image sharpening. A low-level process is characterized by
the fact that both its inputs and outputs are images. Mid-level processing on images involves tasks such as segmentation (partitioning an image into regions or objects), description of those objects to reduce them to a form suitable for computer processing, and classification (recognition) of individual objects. A mid-level process is characterized by the fact that its inputs generally are images, but its outputs are attributes extracted from those images (e.g., edges, contours, and the identity of individual objects). Finally, higher-level processing involves “making sense” of an ensemble of recognized objects, as in image analysis, and, at the far end of the continuum, performing the cognitive functions normally associated with vision.

Based on the preceding comments, we see that a logical place of overlap between image processing and image analysis is the area of recognition of individual regions or objects in an image. Thus, what we call in this book digital image processing encompasses processes whose inputs and outputs are images and, in addition, encompasses processes that extract attributes from images, up to and including the recognition of individual objects. As a simple
illustration to clarify these concepts, consider the area of automated analysis of text. The processes of acquiring an image of the area containing the text, preprocessing that image, extracting (segmenting) the individual characters, describing the characters in a form suitable for computer processing, and recognizing those individual characters are in the scope of what we call digital image processing in this book. Making sense of the content of the page may be viewed as being in the domain of image analysis and even computer vision, depending on the level of complexity implied by the statement “making sense.” As will become evident shortly, digital image processing, as we have defined it, is used successfully in a broad range of areas of exceptional social and economic value.

The areas of application of digital image processing are so varied that some form of organization is desirable in attempting to capture the breadth of this field. One of the simplest ways to develop a basic understanding of the extent of image processing applications is to categorize images according to their source (e.g., visual, X-ray, and so on). The principal energy source for images in use today is the electromagnetic energy spectrum. Other important sources of energy include acoustic, ultrasonic, and electronic (in the form of electron beams used in electron microscopy). Synthetic images, used for modeling and visualization, are generated by computer. In this section we discuss
briefly how images are generated in these various categories and the areas in which they are applied. Images based on radiation from the EM spectrum are the most familiar, especially images in the X-ray and visual bands of the spectrum. Electromagnetic waves can be conceptualized as propagating sinusoidal waves of varying wavelengths, or they can be thought of as a stream of massless particles, each traveling in a wavelike pattern and moving at the speed of light. Each massless particle contains a certain amount (or bundle) of energy, and each bundle of energy is called a photon. If spectral bands are grouped according to energy per photon, we obtain the spectrum shown in the figure below, ranging from gamma rays (highest energy) at one end to radio waves (lowest energy) at the other. The bands are shown shaded to convey the fact that bands of the EM spectrum are not distinct but rather transition smoothly from one to the other.

Image acquisition is the first process. Note that acquisition could be as simple as being given an image that is already in digital form. Generally, the image acquisition stage involves preprocessing, such as scaling. Image enhancement is among the simplest and most appealing areas of digital image processing. Basically, the idea behind enhancement techniques is to bring out detail that is obscured, or simply to highlight certain features of interest in an image. A familiar example of enhancement is when we increase the contrast of an image because
“it looks better.” It is important to keep in mind that enhancement is a very subjective area of image processing.

Image restoration is an area that also deals with improving the appearance of an image. However, unlike enhancement, which is subjective, image restoration is objective, in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image degradation. Enhancement, on the other hand, is based on human subjective preferences regarding what constitutes a “good” enhancement result.

Color image processing is an area that has been gaining in importance because of the significant increase in the use of digital images over the Internet. It covers a number of fundamental concepts in color models and basic color processing in a digital domain. Color is also used in later chapters as the basis for extracting features of interest in
an image. Wavelets are the foundation for representing images in various degrees of resolution. In particular, this material is used in this book for image data compression and for pyramidal representation, in which images are subdivided successively into smaller regions. Compression, as the name implies, deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it. Although storage technology has improved significantly over the past decade, the same cannot be said for transmission capacity. This is true particularly in uses of the Internet, which are characterized by significant pictorial content. Image compression is familiar (perhaps inadvertently) to most users of computers in the form of image file extensions, such as the jpg file extension used in the JPEG (Joint Photographic Experts Group) image compression standard.

Morphological processing deals with tools for extracting image components that are useful in the representation and description of shape. The material in this chapter begins a transition from processes that output images to processes that output image attributes. Segmentation procedures partition an image into its constituent parts or objects. In general, autonomous segmentation is one of the most difficult tasks in digital image processing. A rugged segmentation procedure brings the process a long way toward successful solution of imaging problems that require objects to be identified individually. On the other hand, weak or erratic segmentation algorithms almost always guarantee eventual failure. In general, the more accurate the segmentation, the more likely recognition is to succeed.

Representation and description almost always follow the output of a segmentation stage, which
usually is raw pixel data, constituting either the boundary of a region (i.e., the set of pixels separating one image region from another) or all the points in the region itself. In either case, converting the data to a form suitable for computer processing is necessary. The first decision that must be made is whether the data should be represented as a boundary or as a complete region. Boundary representation is appropriate when the focus is on external shape characteristics, such as corners and inflections. Regional representation is appropriate when the focus is on internal properties, such as texture or skeletal shape. In some applications, these representations complement each other. Choosing a representation is only part of the solution for transforming raw data into a form suitable for subsequent computer processing. A method must also be specified for describing the data so that features of interest are highlighted. Description, also called feature selection, deals with extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another. Recognition is the process that assigns a label (e.g.,
“vehicle”) to an object based on its descriptors. As detailed before, we conclude our coverage of digital image processing with the development of methods for recognition of individual objects.

So far we have said nothing about the need for prior knowledge, or about the interaction between the knowledge base and the processing modules in Fig. 2 above. Knowledge about a problem domain is coded into an image processing system in the form of a knowledge database. This knowledge may be as simple as detailing regions of an image where the information of interest is known to be located, thus limiting the search that has to be conducted in seeking that information. The knowledge base can also be quite complex, such as an interrelated list of all major possible defects in a materials inspection problem, or an image database containing high-resolution satellite images of a region in connection with change-detection applications. In addition to guiding the operation of each processing module, the knowledge base also controls the interaction between modules. This distinction is made in Fig. 2 above by the use of double-headed arrows between the processing modules and the knowledge base, as opposed to single-headed arrows linking the processing modules.

2. Edge Detection

Edge detection is a term in image processing and computer vision, particularly in the areas of feature detection and feature extraction, referring to algorithms that aim at identifying points in
a digital image at which the image brightness changes sharply or, more formally, has discontinuities. Although point and line detection certainly are important in any discussion on segmentation, edge detection is by far the most common approach for detecting meaningful discontinuities in gray level.

Although certain literature has considered the detection of ideal step edges, the edges obtained from natural images are usually not ideal step edges at all. Instead they are normally affected by one or several of the following effects:

1. focal blur caused by a finite depth-of-field and finite point spread function;
2. penumbral blur caused by shadows created by light sources of non-zero radius;
3. shading at a smooth object edge;
4. local specularities or interreflections in the vicinity of object edges.

A typical edge might, for instance, be the border between a block of red color and a block of yellow. In contrast, a line (as can be extracted by a ridge detector) can be a small number of pixels of a different color on an otherwise unchanging background. For a line, there will therefore usually be one edge on each side of the line.

To illustrate why edge detection is not a trivial task, let us consider the problem of detecting edges in the following one-dimensional signal. Here, we may intuitively say that there should be an edge between the 4th and 5th pixels.

5 7 6 4 152 148 149

If the intensity difference were smaller between the 4th and the 5th pixels, and if the intensity differences between the adjacent neighboring pixels were higher, it would not be as easy to say that there should be an edge in the corresponding region. Moreover, one could argue that this case is one in which there are several edges. Hence, to firmly state a specific threshold on how large the intensity change between two neighboring pixels must be for us to say that there should be an edge between these pixels is not always a simple problem.
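The thresholding difficulty described above can be made concrete with a minimal sketch. The code below differences neighboring pixels of the one-dimensional signal and declares an edge wherever the absolute intensity change exceeds a threshold; the threshold value of 20 is an illustrative assumption, not a value given in the text.

```python
# Minimal sketch of threshold-based edge detection on the 1-D signal above.
# Assumption: a fixed threshold of 20 (chosen for illustration only).

signal = [5, 7, 6, 4, 152, 148, 149]

# Discrete first derivative: intensity difference between neighboring pixels.
gradient = [signal[i + 1] - signal[i] for i in range(len(signal) - 1)]

# Declare an edge wherever the absolute intensity change exceeds the threshold.
threshold = 20
edges = [i for i, g in enumerate(gradient) if abs(g) > threshold]

print(gradient)  # [2, -1, -2, 148, -4, 1]
print(edges)     # [3] -> an edge between the 4th and 5th pixels
```

Note how the result depends entirely on the chosen threshold: with a threshold of 1, positions 0, 2, 3, and 4 would all be reported as edges, which is exactly the "several edges" ambiguity the passage describes.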
