Nutch Crawler System Analysis


Contents

1 Nutch Overview
  1.1 Nutch Architecture
2 Crawling
  2.1 The Crawler's Data Structures and Their Meaning
  2.2 Crawl Directory Analysis
  2.3 Crawl Process Overview
  2.4 Crawl Process Analysis
    2.4.1 The inject method
    2.4.2 The generate method
    2.4.3 The fetch method
    2.4.4 The parse method
    2.4.5 The update method
    2.4.6 The invert method
    2.4.7 The index method
    2.4.8 The dedup method
    2.4.9 The merge method
3 Configuration File Analysis
  3.1 nutch-default.xml Analysis (subsections 3.1.1 through 3.1.25)
  3.2 regex-urlfilter.txt Analysis
  3.3 regex-normalize.xml Analysis
  3.4 Summary
4 References

1 Nutch Overview

1.1 Nutch Architecture

2 Crawling

2.1 The Crawler's Data Structures and Their Meaning

The crawler system is driven by Nutch's crawl tool, which ties a series of tools to the construction and maintenance of several data structures: the web database, a set of segments, and the index. These are described in detail below. Their physical files live under the crawl output directory in the crawldb folder, the segments folder, and the index folder respectively. So what does each of them store?

The web database, also called the WebDB, stores the link structure between the pages the crawler has fetched. It is used only while the Crawler is working and has nothing to do with the Searcher. The WebDB stores two kinds of entities: page and link. A page entity represents an actual web page by describing its characteristic information; because there are many pages to describe, the WebDB indexes page entities in two ways, by the page URL and by the MD5 of the page content. The characteristics recorded for a page mainly include the number of links inside the page, fetch-related information such as when the page was fetched, and an importance score for the page. Likewise, a link entity describes the link relation between two page entities. The WebDB thus forms a link graph of the fetched pages, in which the page entities are the nodes and the link entities are the edges.

A single crawl produces many segments. Each segment stores the pages the Crawler fetched in one fetch cycle, together with the index of those pages. While crawling, the Crawler uses the link relations in the WebDB and the crawl policy to generate the fetchlist needed for each fetch cycle; the Fetcher then fetches the pages listed in the fetchlist, indexes them, and stores them in the segment. A segment has a limited lifetime: once its pages are re-fetched by the Crawler, the segment produced by the earlier fetch becomes obsolete. On disk, segment folders are named after their creation time, which makes it easy to delete obsolete segments and save storage space.

The index covers all the pages the Crawler has fetched and is obtained by merging the indexes of all the individual segments. Nutch uses Lucene for indexing, so the Lucene interfaces for manipulating an index also work on Nutch's index. Note, however, that a Lucene segment and a Nutch segment are different things: a Lucene segment is part of an index, while a Nutch segment holds the content and index of one portion of the pages in the WebDB, and the final index generated from the segments no longer depends on them.
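As a rough picture of the two WebDB entity types just described, a minimal model might look like the sketch below. The field names are assumptions for illustration only, not the actual Nutch classes.

import java.util.Date;

// Illustrative model of the two WebDB entities (assumed fields, not Nutch's real classes).
class Page {
    String url;          // index key #1: the page URL
    String contentMd5;   // index key #2: MD5 of the page content
    int outlinkCount;    // number of links found inside the page
    Date fetchTime;      // when the page was fetched
    float score;         // importance score of the page
}

class Link {
    String fromUrl;      // the page the link starts from (graph edge source)
    String toUrl;        // the page the link points to (graph edge target)
}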

2.2 Crawl Directory Analysis

A crawl produces five folders in total:

- crawldb: stores the downloaded URLs together with the download dates, used to time page update checks.
- linkdb: stores the interlinking relationships between URLs, obtained by analysis after downloading finishes.
- segments: stores the fetched pages. The number of subdirectories depends on how many levels of pages were fetched; normally each level gets its own subdirectory, named with a timestamp for easier management. For example, only one level was crawled here, so only the 20090508173137 directory was generated. Each subdirectory in turn contains six subfolders:
  content: the content of every downloaded page.
  crawl_fetch: the status of every downloaded URL.
  crawl_generate: the set of URLs waiting to be downloaded.
  crawl_parse: the outlink data used to update the crawldb.
  parse_data: the outlinks and metadata parsed from each URL.
  parse_text: the parsed text content of each URL.
- indexes: holds the independent index directory produced by each download run.
- index: a Lucene-format index directory, the complete index obtained by merging everything under indexes.
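Putting the pieces together, the output of the single-level crawl used in this walkthrough (the 20090508 crawl directory that appears later in the inject log) looks roughly like this:

20090508/
    crawldb/
    linkdb/
    segments/
        20090508173137/
            content/
            crawl_fetch/
            crawl_generate/
            crawl_parse/
            parse_data/
            parse_text/
    indexes/
    index/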

2.3 Crawl Process Overview

Nine classes are mainly involved:

1. nutch.crawl.Injector: the injector that adds URLs to the crawl database.
2. nutch.crawl.Generator: the generator that produces the list of download tasks.
3. nutch.fetcher.Fetcher: the fetcher that downloads the selected pages.
4. nutch.parse.ParseSegment: the parser that extracts content and parses it for lower-level URLs.
5. nutch.crawl.CrawlDb: the tool responsible for managing the crawl database.
6. nutch.crawl.LinkDb: responsible for link management.
7. nutch.indexer.Indexer: the indexer that builds the index.
8. nutch.indexer.DeleteDuplicates: removes duplicate entries.
9. nutch.indexer.IndexMerger: the index merger that merges the partial index of the current download with the historical indexes.

2.4 Crawl Process Analysis

The Crawler works roughly as follows. First, the Crawler generates from the WebDB a set of URLs to be fetched, called the fetchlist; the download thread, the Fetcher, then fetches those pages according to the fetchlist. If there are many download threads, many fetchlists are generated, one Fetcher per fetchlist. The Crawler then updates the WebDB with the fetched pages and generates a new fetchlist from the updated WebDB, containing the URLs that are still unfetched or newly discovered, and the next fetch cycle starts again. This loop is called the "generate/fetch/update" cycle.

URLs that point to web resources on the same host are normally assigned to the same fetchlist; this prevents too many Fetchers from fetching from one host at the same time and overloading it. Nutch also follows the Robots Exclusion Protocol, so a website can control the Crawler through its own robots.txt.
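A minimal, self-contained sketch of that per-host grouping is shown below; it is illustrative only (roughly speaking, the real Generator achieves the same effect with a host-based partitioner inside a MapReduce job):

import java.net.URI;
import java.util.*;

public class FetchlistsByHost {
    // Group candidate URLs by host so that each fetchlist only touches one host.
    static Map<String, List<String>> partition(List<String> urls) {
        Map<String, List<String>> byHost = new LinkedHashMap<>();
        for (String url : urls) {
            String host = URI.create(url).getHost();
            byHost.computeIfAbsent(host, h -> new ArrayList<>()).add(url);
        }
        return byHost;
    }

    public static void main(String[] args) {
        // Each map entry plays the role of one fetchlist handled by one Fetcher.
        partition(Arrays.asList(
                "http://example.com/a", "http://example.com/b", "http://other.example.org/c"))
                .forEach((host, list) -> System.out.println(host + " -> " + list));
    }
}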

In Nutch, the Crawler's operation is implemented through a series of sub-operations, and Nutch exposes each of them as a sub-command that can be invoked on its own. The sub-operations and their commands (shown in parentheses) are:

1. Create a new WebDB (admin db -create).
2. Write the seed URLs into the WebDB (inject).
3. Generate a fetchlist from the WebDB and write it into a new segment (generate).
4. Fetch the pages named in the fetchlist (fetch).
5. Update the WebDB with the fetched pages (updatedb).
6. Repeat steps 3-5 until the preset crawl depth is reached.
7. Analyse the link relations and generate inverted links (this step is specific to 1.0; exact purpose?).
8. Index the fetched pages (index).
9. Discard pages with duplicate content and duplicate URLs from the index (dedup).
10. Merge the indexes in the segments into the final index used for search (merge).
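To make the control flow concrete before stepping through the individual methods, the toy program below models steps 2-6 and the indexing steps 8-10. It keeps everything in memory, whereas the real tools read and write the crawldb, segments and index directories through Hadoop.

import java.util.*;

// Toy model of the generate/fetch/update cycle plus the post-crawl indexing steps.
public class CrawlFlowSketch {
    public static void main(String[] args) {
        Map<String, Boolean> webDb = new LinkedHashMap<>();        // url -> already fetched?
        webDb.put("http://example.com/", false);                   // step 2: inject the seeds
        List<List<String>> segments = new ArrayList<>();

        int depth = 2;
        for (int d = 0; d < depth; d++) {                          // step 6: repeat steps 3-5
            List<String> fetchlist = new ArrayList<>();            // step 3: generate
            webDb.forEach((url, fetched) -> { if (!fetched) fetchlist.add(url); });
            if (fetchlist.isEmpty()) break;

            List<String> segment = new ArrayList<>();
            for (String url : fetchlist) {                         // step 4: fetch (and parse)
                segment.add(url);
                webDb.putIfAbsent(url + "sub/", false);            // pretend a new link was found
            }
            segments.add(segment);
            fetchlist.forEach(url -> webDb.put(url, true));        // step 5: updatedb
        }

        Set<String> index = new LinkedHashSet<>();                 // steps 8-10: index, dedup, merge
        segments.forEach(index::addAll);                           // the Set drops duplicate URLs
        System.out.println("indexed " + index.size() + " pages: " + index);
    }
}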

The Crawler's detailed workflow is this: after a WebDB has been created (step 1), the "generate/fetch/update" cycle (steps 3-6) starts from a set of seed URLs. When that cycle has completely finished, the Crawler builds an index from the segments generated during the crawl (steps 8-10). Before duplicate URLs are removed (step 9), each segment's index is independent (step 8). Finally, the individual segment indexes are merged into one final index (step 10).

One detail is worth noting. The dedup operation mainly removes duplicate URLs from the segment indexes, but we know duplicate URLs are not allowed in the WebDB, so why is this cleanup still needed? The reason is re-crawling for updates. Say you crawled a set of pages a month ago and re-crawl them a month later to refresh them; until the old segments are deleted they are still in effect, so duplicates have to be removed between the old and the new segments.

Below are the results of stepping through each method with breakpoints set in the Crawl class.

2.4.1 The inject method

Description: initializes the crawldb for the crawl, reads the seed URL directory, and injects its contents into the crawl database. It first looks for the urls directory containing the seed URL files; if this directory has not been created, Nutch 1.0 reports an error.
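Conceptually, what inject does with each line of the seed file can be sketched as follows. This is a simplified stand-in: the real Injector runs the configured URL normalizer and URL filter plug-ins inside a MapReduce job and writes CrawlDatum records; the URLs, the filter pattern, and the score/interval values here are only illustrative defaults.

import java.util.*;
import java.util.regex.Pattern;

// Simplified model of inject: normalize each seed URL, filter it, and create an
// "unfetched" crawldb entry with default metadata.
public class InjectSketch {
    public static void main(String[] args) {
        List<String> seedLines = Arrays.asList(
                " http://example.com/ ", "# a comment line", "ftp://ignored.example/");
        Pattern allowed = Pattern.compile("^https?://.*");   // stand-in for the regex URL filter
        Map<String, String> crawlDb = new LinkedHashMap<>(); // url -> entry metadata

        for (String line : seedLines) {
            String url = line.trim();                             // trivial "normalization"
            if (url.isEmpty() || url.startsWith("#")) continue;   // skip blanks and comments
            if (!allowed.matcher(url).matches()) continue;        // rejected by the filter
            crawlDb.putIfAbsent(url, "status=db_unfetched, score=1.0, fetchInterval=30d");
        }
        crawlDb.forEach((url, meta) -> System.out.println(url + " -> " + meta));
    }
}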

The temporary folder used by Hadoop is obtained: /tmp/hadoop-Administrator/mapred/. The log information is as follows:

2009-05-08 15:41:36,640 INFO Injector - Injector: starting
2009-05-08 15:41:37,031 INFO Injector - Injector: crawlDb: 20090508/crawldb
2009-05-08 15:41:37,781 INFO Injector - Injector: urlDir: urls

Next, some initialization information is set up, and JobClient.runJob from the hadoop package is called; stepping into JobClient's submitJob method shows how the whole job is submitted. The underlying mechanics belong to the analysis of another open-source project, Hadoop, with its complex MapReduce architecture, and are not analysed here. Looking at submitJob: it first obtains a job id; after configureCommandLineOptions executes, a system folder is created under the temporary folder mentioned above, and a job_local_0001 folder is created beneath it. After writeSplitsFile executes, a job.split file is generated under job_local_0001; writeXml then writes job.xml, and finally jobSubmitClient.submitJob formally submits the whole job.
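For reference, the first inject job ("convert injected urls to crawl db entries") is wired up against the old org.apache.hadoop.mapred API roughly as sketched below. This is an abridged reconstruction from memory of the Nutch 1.0 source, not verbatim code; NutchJob, InjectMapper and CrawlDatum are Nutch classes, and the exact calls may differ between versions.

// Abridged sketch of the first inject job submission (approximate, see note above).
JobConf sortJob = new NutchJob(conf);                      // NutchJob extends JobConf
sortJob.setJobName("inject " + urlDir);
FileInputFormat.addInputPath(sortJob, urlDir);             // read the seed URL text files
sortJob.setMapperClass(Injector.InjectMapper.class);       // url line -> <Text url, CrawlDatum>
FileOutputFormat.setOutputPath(sortJob, tempDir);          // the inject-temp-* dir under /tmp
sortJob.setOutputFormat(SequenceFileOutputFormat.class);
sortJob.setOutputKeyClass(Text.class);
sortJob.setOutputValueClass(CrawlDatum.class);
JobClient.runJob(sortJob);                                 // submits the job and waits for completion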

The log from this run is as follows:

2009-05-08 15:41:36,640 INFO Injector - Injector: starting
2009-05-08 15:41:37,031 INFO Injector - Injector: crawlDb: 20090508/crawldb
2009-05-08 15:41:37,781 INFO Injector - Injector: urlDir: urls
2009-05-08 15:52:41,734 INFO Injector - Injector: Converting injected urls to crawl db entries.
2009-05-08 15:56:22,203 INFO JvmMetrics - Initializing JVM Metrics with processName=JobTracker, sessionId=
2009-05-08 16:08:20,796 WARN JobClient - Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
2009-05-08 16:08:20,984 WARN JobClient - No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
2009-05-08 16:24:42,593 INFO FileInputFormat - Total input paths to process : 1
2009-05-08 16:38:29,437 INFO FileInputFormat - Total input paths to process : 1
2009-05-08 16:38:29,546 INFO MapTask - numReduceTasks: 1
2009-05-08 16:38:29,562 INFO MapTask - io.sort.mb = 100
2009-05-08 16:38:29,687 INFO MapTask - data buffer = 79691776/99614720
2009-05-08 16:38:29,687 INFO MapTask - record buffer = 262144/327680
2009-05-08 16:38:29,718 INFO PluginRepository - Plugins: looking in: D:\work\workspace\nutch_crawl\bin\plugins
2009-05-08 16:38:29,921 INFO PluginRepository - Plugin Auto-activation mode: true
2009-05-08 16:38:29,921 INFO PluginRepository - Registered Plugins:
2009-05-08 16:38:29,921 INFO PluginRepository - the nutch core extension points (nutch-extensionpoints)
2009-05-08 16:38:29,921 INFO PluginRepository - Basic Query Filter (query-basic)
2009-05-08 16:38:29,921 INFO PluginRepository - Basic URL Normalizer (urlnormalizer-basic)
2009-05-08 16:38:29,921 INFO PluginRepository - Basic Indexing Filter (index-basic)
2009-05-08 16:38:29,921 INFO PluginRepository - Html Parse Plug-in (parse-html)
2009-05-08 16:38:29,921 INFO PluginRepository - Site Query Filter (query-site)
2009-05-08 16:38:29,921 INFO PluginRepository - Basic Summarizer Plug-in (summary-basic)
2009-05-08 16:38:29,921 INFO PluginRepository - HTTP Framework (lib-http)
2009-05-08 16:38:29,921 INFO PluginRepository - Text Parse Plug-in (parse-text)
2009-05-08 16:38:29,921 INFO PluginRepository - Pass-through URL Normalizer (urlnormalizer-pass)
2009-05-08 16:38:29,921 INFO PluginRepository - Regex URL Filter (urlfilter-regex)
2009-05-08 16:38:29,921 INFO PluginRepository - Http Protocol Plug-in (protocol-http)
2009-05-08 16:38:29,921 INFO PluginRepository - XML Response Writer Plug-in (response-xml)
2009-05-08 16:38:29,921 INFO PluginRepository - Regex URL Normalizer (urlnormalizer-regex)
2009-05-08 16:38:29,921 INFO PluginRepository - OPIC Scoring Plug-in (scoring-opic)
2009-05-08 16:38:29,921 INFO PluginRepository - CyberNeko HTML Parser (lib-nekohtml)
2009-05-08 16:38:29,921 INFO PluginRepository - Anchor Indexing Filter (index-anchor)
2009-05-08 16:38:29,921 INFO PluginRepository - JavaScript Parser (parse-js)
2009-05-08 16:38:29,921 INFO PluginRepository - URL Query Filter (query-url)
2009-05-08 16:38:29,921 INFO PluginRepository - Regex URL Filter Framework (lib-regex-filter)
2009-05-08 16:38:29,921 INFO PluginRepository - JSON Response Writer Plug-in (response-json)
2009-05-08 16:38:29,921 INFO PluginRepository - Registered Extension-Points:
2009-05-08 16:38:29,921 INFO PluginRepository - Nutch Summarizer (org.apache.nutch.searcher.Summarizer)
2009-05-08 16:38:29,921 INFO PluginRepository - Nutch Protocol (org.apache.nutch.protocol.Protocol)
2009-05-08 16:38:29,921 INFO PluginRepository - Nutch Analysis (org.apache.nutch.analysis.NutchAnalyzer)
2009-05-08 16:38:29,921 INFO PluginRepository - Nutch Field Filter (org.apache.nutch.indexer.field.FieldFilter)
2009-05-08 16:38:29,921 INFO PluginRepository - HTML Parse Filter (org.apache.nutch.parse.HtmlParseFilter)
2009-05-08 16:38:29,921 INFO PluginRepository - Nutch Query Filter (org.apache.nutch.searcher.QueryFilter)
2009-05-08 16:38:29,921 INFO PluginRepository - Nutch Search Results Response Writer (org.apache.nutch.searcher.response.ResponseWriter)
2009-05-08 16:38:29,921 INFO PluginRepository - Nutch URL Normalizer (org.apache.nutch.net.URLNormalizer)
2009-05-08 16:38:29,921 INFO PluginRepository - Nutch URL Filter (org.apache.nutch.net.URLFilter)
2009-05-08 16:38:29,921 INFO PluginRepository - Nutch Online Search Results Clustering Plugin (org.apache.nutch.clustering.OnlineClusterer)
2009-05-08 16:38:29,921 INFO PluginRepository - Nutch Indexing Filter (org.apache.nutch.indexer.IndexingFilter)
2009-05-08 16:38:29,921 INFO PluginRepository - Nutch Content Parser (org.apache.nutch.parse.Parser)
2009-05-08 16:38:29,921 INFO PluginRepository - Nutch Scoring (org.apache.nutch.scoring.ScoringFilter)
2009-05-08 16:38:29,921 INFO PluginRepository - Ontology Model Loader (org.apache.nutch.ontology.Ontology)
2009-05-08 16:38:29,968 INFO Configuration - found resource crawl-urlfilter.txt at file:/D:/work/workspace/nutch_crawl/bin/crawl-urlfilter.txt
2009-05-08 16:38:29,984 WARN RegexURLNormalizer - can't find rules for scope inject, using default
2009-05-08 16:38:29,984 INFO MapTask - Starting flush of map output
2009-05-08 16:38:30,203 INFO MapTask - Finished spill 0
2009-05-08 16:38:30,203 INFO TaskRunner - Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
2009-05-08 16:38:30,218 INFO LocalJobRunner - file:/D:/work/workspace/nutch_crawl/urls/site.txt:0+19
2009-05-08 16:38:30,218 INFO TaskRunner - Task attempt_local_0001_m_000000_0 done.
2009-05-08 16:38:30,234 INFO LocalJobRunner -
2009-05-08 16:38:30,250 INFO Merger - Merging 1 sorted segments
2009-05-08 16:38:30,265 INFO Merger - Down to the last merge-pass, with 1 segments left of total size: 53 bytes
2009-05-08 16:38:30,265 INFO LocalJobRunner -
2009-05-08 16:38:30,390 INFO TaskRunner - Task:attempt_local_0001_r_000000_0 is done. And is in the process of commiting
2009-05-08 16:38:30,390 INFO LocalJobRunner -
2009-05-08 16:38:30,390 INFO TaskRunner - Task attempt_local_0001_r_000000_0 is allowed to commit now
2009-05-08 16:38:30,406 INFO FileOutputCommitter - Saved output of task attempt_local_0001_r_000000_0 to file:/tmp/hadoop-Administrator/mapred/temp/inject-temp-474192304
2009-05-08 16:38:30,406 INFO LocalJobRunner - reduce > reduce
2009-05-08 16:38:30,406 INFO TaskRunner - Task attempt_local_0001_r_000000_0 done.

After execution, the returned running value is as follows:

Job: job_local_0001
file: file:/tmp/hadoop-Administrator/mapred/system/job_local_0001/job.xml
tracking URL: http://localhost:8080/

2009-05-08 16:47:14,093 INFO JobClient - Running job: job_local_0001
2009-05-08 16:49:51,859 INFO JobClient - Job complete: job_local_0001
2009-05-08 16:51:36,062 INFO JobClient - Counters: 11
2009-05-08 16:51:36,062 INFO JobClient - File Systems
2009-05-08 16:51:36,062 INFO JobClient - Local bytes read=51591
2009-05-08 16:51:36,062 INFO JobClient - Local bytes written=104337
2009-05-08 16:51:36,062 INFO JobClient - Map-Reduce Framework
2009-05-08 16:51:36,062 INFO JobClient - Reduce input groups=1
2009-05-08 16:51:36,062 INFO JobClient - Combine output records=0
2009-05-08 16:51:36,062 INFO JobClient - Map input records=1
2009-05-08 16:51:36,062 INFO JobClient - Reduce output records=1
2009-05-08 16:51:36,062 INFO JobClient - Map output bytes=49
2009-05-08 16:51:36,062 INFO JobClient - Map input bytes=19
2009-05-08 16:51:36,062 INFO JobClient - Combine input records=0
2009-05-08 16:51:36,062 INFO JobClient - Map output records=1
2009-05-08 16:51:36,062 INFO JobClient - Reduce input records=1

At this point the first runJob call has finished. Summary: to be written.

Next comes generating the crawldb folder and merging the injected urls into it:

JobClient.runJob(mergeJob);
CrawlDb.install(mergeJob, crawlDb);

This step first creates a job_local_0002 directory under the temporary folder mentioned earlier, and, just as before, job.split and job.xml are generated there; it then completes the creation of the crawldb, and finally the files under the temporary temp folder are deleted. This concludes the inject phase.
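CrawlDb.install is essentially a directory swap: the output of the merge job becomes the new crawldb/current, and any previous current version is retired. The toy below models that idea on the local filesystem; it is not the real implementation, which works through the Hadoop FileSystem API and also manages a lock file, and the paths are hypothetical.

import java.io.IOException;
import java.nio.file.*;

// Toy model of CrawlDb.install: promote the freshly written job output to
// crawldb/current, retiring any previous version as crawldb/old.
public class InstallSketch {
    static void install(Path newCrawlDb, Path crawlDb) throws IOException {
        Path current = crawlDb.resolve("current");
        Files.createDirectories(crawlDb);
        if (Files.exists(current)) {
            Files.move(current, crawlDb.resolve("old"));  // retire the previous db
        }                                                 // (assumes no leftover "old")
        Files.move(newCrawlDb, current);                  // promote the new job output
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("install-sketch");
        Path jobOutput = Files.createDirectories(tmp.resolve("job_output"));
        install(jobOutput, tmp.resolve("crawldb"));
        System.out.println(Files.exists(tmp.resolve("crawldb/current")));  // prints true
    }
}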

The last part of the log is as follows:

2009-05-08 17:03:57,250 INFO Injector - Injector: Merging injected urls into crawl db.
2009-05-08 17:10:01,015 INFO JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2009-05-08 17:10:15,953 WARN JobClient - Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
2009-05-08 17:10:16,156 WARN JobClient - No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
2009-05-08 17:12:15,296 INFO FileInputFormat - Total input paths to process : 1
2009-05-08 17:13:40,296 INFO FileInputFormat - Total input paths to process : 1
2009-05-08 17:13:40,406 INFO MapTask - numReduceTasks: 1
2009-05-08 17:13:40,406 INFO MapTask - io.sort.mb = 100
2009-05-08 17:13:40,515 INFO MapTask - data buffer = 79691776/99614720
2009-05-08 17:13:40,515 INFO MapTask - record buffer = 262144/327680
2009-05-08 17:13:40,546 INFO MapTask - Starting flush of map output
2009-05-08 17:13:40,765 INFO MapTask - Finished spill 0
2009-05-08 17:13:40,765 INFO TaskRunner - Task:attempt_local_0002_m_000000_0 is done. And is in the process of commiting
2009-05-08 17:13:40,765 INFO LocalJobRunner - file:/tmp/hadoop-Administrator/mapred/temp/inject-temp-474192304/part-00000:0+143
2009-05-08 17:13:40,765 INFO TaskRunner - Task attempt_local_0002_m_000000_0 done.
2009-05-08 17:13:40,796 INFO LocalJobRunner -
2009-05-08 17:13:40,796 INFO Merger - Mergi
