
Nutch/Solr Series (2): Analyzing the Nutch Commands

1. $ ./nutch readdb crawlDir/crawldb/ -stats
This command reports statistics for the crawldb: the total number of links it contains, how many have been fetched, and how many are still unfetched.

[email protected] /home/apache-nutch-1.9/bin $ ./nutch readdb crawlDir/crawldb/ -stats
CrawlDb statistics start: crawlDir/crawldb/
Statistics for CrawlDb: crawlDir/crawldb/
TOTAL urls:	3568
retry 0:	3567
retry 1:	1
min score:	0.0
avg score:	8.7107625E-4
max score:	1.133
status 1 (db_unfetched):	2982
status 2 (db_fetched):	586
CrawlDb statistics: done
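As a sketch of how these -stats numbers can be post-processed, the snippet below computes the fetched percentage. The awk pipeline is my own assumption, not part of Nutch; the heredoc just reproduces the figures from the run above.

```shell
# Saved `readdb -stats` output (figures copied from the run above).
stats=$(cat <<'EOF'
TOTAL urls:	3568
status 1 (db_unfetched):	2982
status 2 (db_fetched):	586
EOF
)
# Pull the last field of the matching lines.
total=$(printf '%s\n' "$stats" | awk '/TOTAL urls/ {print $NF}')
fetched=$(printf '%s\n' "$stats" | awk '/db_fetched/ {print $NF}')
# Report progress as a percentage of all known URLs.
awk -v t="$total" -v f="$fetched" \
  'BEGIN { printf "fetched: %d/%d (%.1f%%)\n", f, t, 100 * f / t }'
# → fetched: 586/3568 (16.4%)
```

In practice you would pipe the live `./nutch readdb crawlDir/crawldb/ -stats` output into the same awk filters instead of a heredoc.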

2. $ ./nutch readdb crawlDir/crawldb/ -dump crawldb
This command is used to dump the link data into the crawldb folder; the output records detailed information for every link.

[email protected] /home/apache-nutch-1.9/bin $ ./nutch readdb crawlDir/crawldb/ -dump crawldb
CrawlDb dump: starting
CrawlDb db: crawlDir/crawldb/
CrawlDb dump: done
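One common use of the dump is pulling out the URLs that are still unfetched. The sketch below assumes the record layout seen in command 3 (a URL line followed by a Status line); the two-record sample and the second URL are made up for illustration.

```shell
# A small stand-in for a crawldb dump part file (format assumed from
# the per-URL record shown by `readdb -url`; the /loupan/ URL is invented).
cat > sample-dump <<'EOF'
http://cs.fang.lianjia.com/	Version: 7
Status: 2 (db_fetched)
Fetch time: Tue Nov 14 21:37:39 CST 2017

http://cs.fang.lianjia.com/loupan/	Version: 7
Status: 1 (db_unfetched)
Fetch time: Tue Nov 14 21:37:39 CST 2017
EOF
# Remember each URL line; print it when its Status line says db_unfetched.
awk '/^http/ {url = $1} /db_unfetched/ {print url}' sample-dump
# → http://cs.fang.lianjia.com/loupan/
```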

3. $ ./nutch readdb crawlDir/crawldb/ -url http://cs.fang.lianjia.com/
This command shows the crawldb record for a single link.

[email protected] /home/apache-nutch-1.9/bin $ ./nutch readdb crawlDir/crawldb/ -url http://cs.fang.lianjia.com/
URL: http://cs.fang.lianjia.com/
Version: 7
Status: 2 (db_fetched)
Fetch time: Tue Nov 14 21:37:39 CST 2017
Modified time: Thu Jan 01 08:00:00 CST 1970
Retries since fetch: 0
Retry interval: 2592000 seconds (30 days)
Score: 1.1338482
Signature: dc19d8253ee5b3af82535b28e422d45a
Metadata:
	_pst_=success(1), lastModified=0
	_rs_=344
	Content-Type=text/html
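A quick sanity check on such a record: the retry interval is given in seconds, and dividing by 86400 should reproduce the day count Nutch prints in parentheses. A minimal sketch (the field extraction is an assumption about the layout above):

```shell
# The Retry interval line copied from the record above.
record='Retry interval: 2592000 seconds (30 days)'
# Field 3 is the interval in seconds.
secs=$(printf '%s\n' "$record" | awk '{print $3}')
# 86400 seconds per day; should match the "(30 days)" annotation.
echo "refetch due in $((secs / 86400)) days"
# → refetch due in 30 days
```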

4. $ ./nutch readlinkdb crawlDir/linkdb/ -dump linkdb
This command dumps all of the inlinks recorded for the crawled pages (in this experiment no inlinks were produced).

[email protected] /home/apache-nutch-1.9/bin $ ./nutch readlinkdb crawlDir/linkdb/ -dump linkdb
LinkDb dump: starting at 2017-10-17 20:05:22
LinkDb dump: db: crawlDir/linkdb/
LinkDb dump: finished at 2017-10-17 20:05:24, elapsed: 00:00:01
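Had the crawl produced inlinks, the dump could be summarized into an inlink count per target URL. Since this run produced none, the sample below is invented, and the record layout (a target URL line ending in "Inlinks:", followed by "fromUrl: ... anchor: ..." lines) is my assumption about the linkdb text dump:

```shell
# Invented linkdb dump sample (format assumed, see lead-in).
cat > linkdb-sample <<'EOF'
http://cs.fang.lianjia.com/ Inlinks:
 fromUrl: http://cs.lianjia.com/ anchor: 新房
 fromUrl: http://bj.fang.lianjia.com/ anchor: 长沙新房
EOF
# Start a counter at each target URL; bump it for every fromUrl line.
awk '/Inlinks:/ {url = $1; n[url] = 0; next}
     /fromUrl:/ {n[url]++}
     END {for (u in n) print u, n[u]}' linkdb-sample
# → http://cs.fang.lianjia.com/ 2
```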

5. $ ./nutch readseg -list -dir crawlDir/segments/
This command prints summary statistics for the segments.

[email protected] /home/apache-nutch-1.9/bin $ ./nutch readseg -list -dir crawlDir/segments/
NAME		GENERATED	FETCHER START		FETCHER END		FETCHED	PARSED
20171015213734	1		2017-10-15T21:37:39	2017-10-15T21:37:39	1	1
20171015213808	50		2017-10-15T21:38:14	2017-10-15T21:42:55	50	50
20171015214329	536		2017-10-15T21:43:35	2017-10-15T22:35:05	536	535
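The per-segment rows can be totaled to see how much the whole crawl fetched and parsed. A small sketch over the listing above (summing the last two columns is my own convenience, not a Nutch feature):

```shell
# The segment listing from the run above, saved to a file.
cat > seglist <<'EOF'
NAME GENERATED FETCHER_START FETCHER_END FETCHED PARSED
20171015213734 1 2017-10-15T21:37:39 2017-10-15T21:37:39 1 1
20171015213808 50 2017-10-15T21:38:14 2017-10-15T21:42:55 50 50
20171015214329 536 2017-10-15T21:43:35 2017-10-15T22:35:05 536 535
EOF
# Skip the header; FETCHED is the second-to-last field, PARSED the last.
awk 'NR > 1 { fetched += $(NF-1); parsed += $NF }
     END { print "fetched:", fetched, "parsed:", parsed }' seglist
# → fetched: 587 parsed: 586
```

The parsed total (586) matches the db_fetched count reported by `readdb -stats` in command 1, which is a useful cross-check.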

6. $ ./nutch readseg -dump crawlDir/segments/20171015213734 segdb12 -locale zh_CN (the -locale zh_CN parameter can be omitted)
This command dumps the contents of the segment, including the page content itself, as plain text files into the segdb12 folder (in this run some of the Chinese text came out garbled).

[email protected] /home/apache-nutch-1.9/bin $ ./nutch readseg -dump crawlDir/segments/20171015213734 segdb12 -locale zh_CN
SegmentReader: dump segment: crawlDir/segments/20171015213734
SegmentReader: done

From this dump you can see that a segment consists of four parts: CrawlDatum, Content, ParseData, and ParseText.
CrawlDatum: the basic crawl information, equivalent to what you get when reading the crawldb; it corresponds to the update step of the generate/fetch/update cycle.
Content: the raw content fetched by the fetcher, i.e. the HTML source (handled by the protocol-httpclient plugin by default); you can open the live page and compare.
ParseData and ParseText: the parsed content, produced from the raw content by the appropriate parser plugin (here parse-html); these are what the indexing step uses to build the corresponding indexes.
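To illustrate the four-part structure, the sketch below counts the section markers in a dump record. The sample record and its "CrawlDatum::"-style markers are my reconstruction of how SegmentReader labels each part, so treat the exact layout as an assumption:

```shell
# A single reconstructed record from a readseg dump (layout assumed).
cat > seg-sample <<'EOF'
Recno:: 0
URL:: http://cs.fang.lianjia.com/

CrawlDatum::
Version: 7
Status: 33 (fetch_success)

Content::
Version: -1
url: http://cs.fang.lianjia.com/

ParseData::
Version: 5
Status: success(1,0)

ParseText::
长沙新房 ...
EOF
# Each record should contribute exactly one marker per part.
grep -cE '^(CrawlDatum|Content|ParseData|ParseText)::' seg-sample
# → 4
```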
