A web crawler is a program that automatically fetches web pages: it downloads pages from the Internet on behalf of a search engine and is one of the engine's core components.

A traditional crawler starts from the URLs of one or more seed pages and collects the URLs found on them; as it fetches each page, it keeps extracting new URLs and adding them to a queue until the system's stop condition is met. For vertical search, a focused crawler, that is, one that targets pages on a specific topic, is a better fit.
The core of this article's crawler is the crawl() method:
Java code:
public void crawl() throws Throwable {
    while (continueCrawling()) {
        CrawlerUrl url = getNextUrl(); // take the next URL from the pending queue
        if (url != null) {
            printCrawlInfo();
            String content = getContent(url); // download the page text for this URL
            // A focused crawler only keeps pages related to its topic;
            // here a simple regular-expression match stands in for real classification.
            if (isContentRelevant(content, this.regexpSearchPattern)) {
                saveContent(url, content); // save the page to local disk
                // Extract the links in the page and add them to the pending queue.
                Collection<String> urlStrings = extractUrls(content, url);
                addUrlsToUrlQueue(url, urlStrings);
            } else {
                System.out.println(url + " is not relevant, ignoring ...");
            }
            // Pause between requests so the target site does not block us.
            Thread.sleep(this.delayBetweenUrls);
        }
    }
    closeOutputStream();
}
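The stopping condition continueCrawling() is not shown in the original listing. A minimal sketch, assuming the crawler counts saved pages against the limit passed to its constructor (the field names numberItemsSaved and maxNumberUrls are assumptions, not the author's code):

// Hypothetical stopping condition; both field names are assumed.
private boolean continueCrawling() {
    // Stop when the pending queue is exhausted or enough pages have been saved.
    return !urlQueue.isEmpty() && (numberItemsSaved < maxNumberUrls);
}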
The whole function is built from a handful of core methods: getNextUrl, getContent, isContentRelevant, extractUrls, and addUrlsToUrlQueue. Each is introduced in turn below, starting with getNextUrl:
Java code:
private CrawlerUrl getNextUrl() throws Throwable {
    CrawlerUrl nextUrl = null;
    while ((nextUrl == null) && (!urlQueue.isEmpty())) {
        CrawlerUrl crawlerUrl = this.urlQueue.remove();
        // doWeHavePermissionToVisit: are we allowed to visit this URL? A polite
        //   crawler follows the rules the site publishes in its "robots.txt".
        // isUrlAlreadyVisited: has this URL been visited already? Large search
        //   engines usually deduplicate with a Bloom filter; a plain HashMap is used here.
        // isDepthAcceptable: has the depth limit been reached? Crawlers generally
        //   traverse breadth-first, and some sites set crawler traps (automatically
        //   generated dead-end links that lock a crawler into an endless loop);
        //   a depth limit guards against them.
        if (doWeHavePermissionToVisit(crawlerUrl)
                && (!isUrlAlreadyVisited(crawlerUrl))
                && isDepthAcceptable(crawlerUrl)) {
            nextUrl = crawlerUrl;
            // System.out.println("Next url to be visited is " + nextUrl);
        }
    }
    return nextUrl;
}
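The dedup and depth checks referenced above are not defined in the original listing. A minimal sketch of what they might look like, assuming CrawlerUrl exposes getUrlString() and getDepth(), and using the HashMap mentioned in the comments (the field names visitedUrlMap and maxDepth are assumptions):

// Hypothetical helpers; HashMap-based dedup follows the comment above, and
// maxDepth would correspond to the depth limit passed to the constructor.
private Map<String, CrawlerUrl> visitedUrlMap = new HashMap<String, CrawlerUrl>();

private boolean isUrlAlreadyVisited(CrawlerUrl crawlerUrl) {
    // A large-scale crawler would use a Bloom filter here to bound memory.
    return visitedUrlMap.containsKey(crawlerUrl.getUrlString());
}

private boolean isDepthAcceptable(CrawlerUrl crawlerUrl) {
    // Reject links beyond the depth limit to sidestep crawler traps.
    return crawlerUrl.getDepth() <= maxDepth;
}

private void markUrlAsVisited(CrawlerUrl url) {
    visitedUrlMap.put(url.getUrlString(), url);
}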
For more detail on how to write a robots.txt file, see this article:
http://www.bloghuman.com/post/67/
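doWeHavePermissionToVisit is likewise left to the reader. As a rough illustration only: assuming a hypothetical helper getDisallowedPaths(host) that fetches http://host/robots.txt once, caches it, and returns the parsed Disallow: path prefixes for our user agent, the check could look like this:

// Hypothetical permission check; getDisallowedPaths is an assumed helper.
private boolean doWeHavePermissionToVisit(CrawlerUrl crawlerUrl) {
    try {
        URL url = new URL(crawlerUrl.getUrlString());
        for (String prefix : getDisallowedPaths(url.getHost())) {
            if (url.getPath().startsWith(prefix)) {
                return false; // the path falls under a Disallow: rule
            }
        }
        return true;
    } catch (MalformedURLException e) {
        return false; // never visit URLs we cannot even parse
    }
}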
Internally, getContent fetches the page with Apache HttpClient 4.1:
Java code:
private String getContent(CrawlerUrl url) throws Throwable {
    // HttpClient 4.1 is invoked differently from earlier versions.
    HttpClient client = new DefaultHttpClient();
    HttpGet httpGet = new HttpGet(url.getUrlString());
    StringBuffer strBuf = new StringBuffer();
    HttpResponse response = client.execute(httpGet);
    if (HttpStatus.SC_OK == response.getStatusLine().getStatusCode()) {
        HttpEntity entity = response.getEntity();
        if (entity != null) {
            BufferedReader reader = new BufferedReader(
                    new InputStreamReader(entity.getContent(), "UTF-8"));
            String line = null;
            if (entity.getContentLength() > 0) {
                strBuf = new StringBuffer((int) entity.getContentLength());
                while ((line = reader.readLine()) != null) {
                    strBuf.append(line);
                }
            }
        }
        if (entity != null) {
            entity.consumeContent();
        }
    }
    // Mark the URL as visited.
    markUrlAsVisited(url);
    return strBuf.toString();
}
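As an aside, HttpClient 4.x ships a helper that collapses the manual stream handling above. The variant below is a sketch of that shortcut; note that the hand-rolled loop above drops line terminators when concatenating, and both versions hardcode UTF-8 instead of reading the charset from the response headers:

// Compact alternative using org.apache.http.util.EntityUtils.
private String getContentCompact(CrawlerUrl url) throws Throwable {
    HttpClient client = new DefaultHttpClient();
    HttpResponse response = client.execute(new HttpGet(url.getUrlString()));
    String content = "";
    if (HttpStatus.SC_OK == response.getStatusLine().getStatusCode()) {
        HttpEntity entity = response.getEntity();
        if (entity != null) {
            content = EntityUtils.toString(entity, "UTF-8"); // reads and closes the stream
        }
    }
    markUrlAsVisited(url);
    return content;
}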
For a vertical application, accuracy of the data usually matters more than coverage. The defining trait of a focused crawler is that it collects only data related to its topic, which is the job of the isContentRelevant method. A real system might use classification or prediction techniques here; for simplicity, a regular-expression match is used instead. The main code:
Java code:
public static boolean isContentRelevant(String content,
        Pattern regexpPattern) {
    boolean retValue = false;
    if (content != null) {
        // Does the page text match the regular expression?
        Matcher m = regexpPattern.matcher(content.toLowerCase());
        retValue = m.find();
    }
    return retValue;
}
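For example, with the topic pattern "java" used in the test program below, a page mentioning Java in any letter case is accepted, because the content is lower-cased before matching:

Pattern p = Pattern.compile("java");
System.out.println(isContentRelevant("<html>JAVA tutorials</html>", p)); // true
System.out.println(isContentRelevant("<html>python tips</html>", p));   // false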
The job of extractUrls is to pull further URLs, both internal and external links, out of the page:
Java code:
public List<String> extractUrls(String text, CrawlerUrl crawlerUrl) {
    Map<String, String> urlMap = new HashMap<String, String>();
    extractHttpUrls(urlMap, text);
    extractRelativeUrls(urlMap, text, crawlerUrl);
    return new ArrayList<String>(urlMap.keySet());
}

// Handle external (absolute) links.
private void extractHttpUrls(Map<String, String> urlMap, String text) {
    Matcher m = httpRegexp.matcher(text);
    while (m.find()) {
        String url = m.group();
        String[] terms = url.split("a href=\"");
        for (String term : terms) {
            // System.out.println("Term = " + term);
            if (term.startsWith("http")) {
                int index = term.indexOf("\"");
                if (index > 0) {
                    term = term.substring(0, index);
                }
                urlMap.put(term, term);
                System.out.println("Hyperlink: " + term);
            }
        }
    }
}
// Handle internal (relative) links.
private void extractRelativeUrls(Map<String, String> urlMap, String text,
        CrawlerUrl crawlerUrl) {
    Matcher m = relativeRegexp.matcher(text);
    URL textURL = crawlerUrl.getURL();
    String host = textURL.getHost();
    while (m.find()) {
        String url = m.group();
        String[] terms = url.split("a href=\"");
        for (String term : terms) {
            if (term.startsWith("/")) {
                int index = term.indexOf("\"");
                if (index > 0) {
                    term = term.substring(0, index);
                }
                String s = "http://" + host + term;
                urlMap.put(s, s);
                System.out.println("Relative url: " + s);
            }
        }
    }
}
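The two patterns httpRegexp and relativeRegexp are not shown in the original listing. Definitions along the following lines would be consistent with the split("a href=\"") logic above; they are an assumption, not the author's code:

// Hypothetical patterns matching an anchor tag up to the closing quote of its href.
private static final Pattern httpRegexp =
        Pattern.compile("<a href=\"http(.*?)\"", Pattern.CASE_INSENSITIVE);
private static final Pattern relativeRegexp =
        Pattern.compile("<a href=\"/(.*?)\"", Pattern.CASE_INSENSITIVE);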
That completes a simple web crawler, which can be exercised with the following test program:
Java code:
public static void main(String[] args) {
    try {
        String url = "http://www.amazon.com";
        Queue<CrawlerUrl> urlQueue = new LinkedList<CrawlerUrl>();
        String regexp = "java";
        urlQueue.add(new CrawlerUrl(url, 0));
        NaiveCrawler crawler = new NaiveCrawler(urlQueue, 100, 5, 1000L,
                regexp);
        // boolean allowCrawl = crawler.areWeAllowedToVisit(url);
        // System.out.println("Allowed to crawl: " + url + " " + allowCrawl);
        crawler.crawl();
    } catch (Throwable t) {
        System.out.println(t.toString());
        t.printStackTrace();
    }
}
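The article does not show the NaiveCrawler constructor, but from the fields used earlier its arguments most plausibly are: the seed queue, a cap of 100 pages, a depth limit of 5, a 1000 ms politeness delay (delayBetweenUrls), and the topic regular expression.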
Of course, you can extend it with more advanced features, such as multithreading, smarter focusing, or building an index with Lucene. For more demanding scenarios, consider an open-source spider such as Nutch or Heritrix; those are beyond the scope of this article.