Steps / Contents:
1. Introduction
    (1) sitemap
    (2) robots.txt
2. Generating a sitemap with scrapy

This article was first published on my personal blog at https://lisper517.top/index.php/archives/50/ ; please credit the source when republishing.
The purpose of this article is to introduce sitemaps and robots.txt.
It was written on September 14, 2022. The platform is win10 and the editor is VS Code.

As the saying goes, the better you write crawlers, the sooner you end up in jail. When running a crawler, do not eat up too much of the target server's bandwidth, or you may bring trouble on yourself.

My abilities are limited and I am not a CS professional, so mistakes and violations of good coding practice are hard to avoid; please point them out in the comments.

1. Introduction

If these concepts feel abstract, have a look at some real examples first, such as https://www.baidu.com/robots.txt and https://img.kuaidaili.com/sitemap.xml .

(1) sitemap

A sitemap is a map of a site's pages. It helps crawlers by stating which pages are available for crawling (a sitemap should only contain pages that may be crawled) and by giving basic metadata for each URL (last modification time, change frequency, importance, and so on). Since search engines all rely on crawlers to fetch page details, a sitemap is a basic tool for SEO (search engine optimization): if the crawlers of baidu, google and the other major search engines can reach your pages, those pages will show up more often in keyword searches. The sitemap format and related details are documented on the official sitemap site, https://www.sitemaps.org ; below is an example:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
   <url>
      <loc>http://www.example.com/</loc>
      <lastmod>2005-01-01</lastmod>
      <changefreq>monthly</changefreq>
      <priority>0.8</priority>
   </url>
   <url>
      <loc>http://www.example.com/.../.../</loc>
      <lastmod>2005-01-01</lastmod>
      <changefreq>monthly</changefreq>
      <priority>0.8</priority>
   </url>
   ...
</urlset>

A finished sitemap is usually placed in the site's root directory and named sitemap.xml. Also, besides passively waiting for search engines to crawl your pages, you can submit the sitemap to them yourself; site builders such as wordpress generate a sitemap automatically once the site is set up.
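
To make the format concrete, here is a minimal sketch (my addition, using only Python's standard library) that parses a local sitemap.xml and prints each URL entry; the file name is an assumption:

import xml.etree.ElementTree as ET

# The sitemap namespace has to be spelled out when searching the tree.
NS = {'sm': 'http://www.sitemaps.org/schemas/sitemap/0.9'}

tree = ET.parse('sitemap.xml')  # assumes the file sits in the working directory
for url in tree.getroot().findall('sm:url', NS):
    loc = url.find('sm:loc', NS).text
    lastmod = url.find('sm:lastmod', NS)
    print(loc, lastmod.text if lastmod is not None else '(no lastmod)')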

(2) robots.txt

robots.txt is another file that advises crawlers on how to crawl your site, and it is likewise placed in the site's root directory. Its format is:

User-agent: *
Allow: /
Disallow: /

Sitemap: http://www.***.com/sitemap.xml

The value after User-agent names a specific search-engine crawler, such as Googlebot or Baiduspider, while * means the rules apply to all crawlers. Allow and Disallow are followed by the directories that may or may not be crawled; write only the path, not the full URL. Sitemap states where the site's sitemap lives and is written only once (User-agent, Allow and Disallow may appear multiple times). Also, if the file is written as:

User-agent: Googlebot
Allow: /

User-agent: *
Disallow: /

then google's crawler is allowed to visit all of your pages while every other crawler is banned from the whole site. The point of this example is that one specific crawler is named first, and the wildcard * then covers all remaining crawlers.
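
To check how a given crawler would interpret such rules, Python's standard library provides urllib.robotparser; a minimal sketch (the target site and paths are only examples):

from urllib import robotparser

# Download and parse a live robots.txt, then ask whether two
# different user agents are allowed to fetch the same page.
rp = robotparser.RobotFileParser()
rp.set_url('https://www.baidu.com/robots.txt')
rp.read()

print(rp.can_fetch('Googlebot', 'https://www.baidu.com/s?wd=test'))
print(rp.can_fetch('*', 'https://www.baidu.com/s?wd=test'))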

2. Generating a sitemap with scrapy

First, a warning: this approach is exactly what a third-party crawler does when it crawls your pages. If the pages of your site are not densely connected by internal links, it is better to generate the sitemap from the site's document root or from the database.
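
For a static site, generating from the document root can be as simple as walking the directory tree; a minimal sketch, where the root path and base URL are assumptions you must adapt:

import os

SITE_ROOT = '/var/www/html'       # assumed document root
BASE_URL = 'https://example.com'  # assumed site address

# Walk the document root and print one URL per HTML file found.
for dirpath, dirnames, filenames in os.walk(SITE_ROOT):
    for name in filenames:
        if name.endswith('.html'):
            rel = os.path.relpath(os.path.join(dirpath, name), SITE_ROOT)
            print(BASE_URL + '/' + rel.replace(os.sep, '/'))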

With the python and scrapy environment installed, run the following commands in cmd on the win10 machine:

cd <the directory where you keep your scrapy projects>
scrapy startproject sitemap
cd sitemap
scrapy genspider sitemap_mysite example.com

Then start editing the scrapy project.

items.py is as follows:

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class SitemapItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    url = scrapy.Field()

middlewares.py is as follows:

# Define here the models for your spider middleware
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html

import random
from scrapy import signals

# useful for handling different item types with a single interface
from itemadapter import is_item, ItemAdapter


class SitemapSpiderMiddleware:
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.

        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.

        # Must return an iterable of Request, or item objects.
        for i in result:
            yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.

        # Should return either None or an iterable of Request or item objects.
        pass

    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn’t have a response associated.

        # Must return only requests (not items).
        for r in start_requests:
            yield r

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)


class SitemapDownloaderMiddleware:
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.
    # A small pool of desktop-browser User-Agent strings; process_request
    # below picks one at random for every outgoing request.
    UA_list = [
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36',
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36 Edg/105.0.1343.33',
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:104.0) Gecko/20100101 Firefox/104.0'
    ]

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_request(self, request, spider):
        # Called for each request that goes through the downloader
        # middleware.

        # Must either:
        # - return None: continue processing this request
        # - or return a Response object
        # - or return a Request object
        # - or raise IgnoreRequest: process_exception() methods of
        #   installed downloader middleware will be called
        request.headers['User-Agent'] = random.choice(self.UA_list)
        return None

    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.

        # Must either;
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        return response

    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.

        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)

pipelines.py is as follows:

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html

# useful for handling different item types with a single interface
from itemadapter import ItemAdapter
import os


class SitemapPipeline:

    def open_spider(self, spider):
        # Collect every crawled URL into urls.txt in the working directory.
        # Note: mode 'a' (append) also keeps URLs from previous runs.
        file_name = 'urls.txt'
        abs_file_path = os.path.join(os.getcwd(), file_name)
        self.fp = open(abs_file_path, 'a', encoding='utf-8')

    def process_item(self, item, spider):
        url = item['url']
        self.fp.write(url + '\n')
        return item

    def close_spider(self, spider):
        self.fp.close()
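
Because the pipeline appends, re-running the spider leaves duplicate lines in urls.txt. A small post-run sketch (same file name as above) that removes duplicates while preserving first-seen order:

# Deduplicate urls.txt in place, keeping first-seen order.
seen = set()
unique_urls = []
with open('urls.txt', encoding='utf-8') as fp:
    for line in fp:
        url = line.strip()
        if url and url not in seen:
            seen.add(url)
            unique_urls.append(url)

with open('urls.txt', 'w', encoding='utf-8') as fp:
    fp.write('\n'.join(unique_urls) + '\n')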

settings.py is as follows:

# Scrapy settings for sitemap project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

LOG_LEVEL = 'ERROR'

BOT_NAME = 'sitemap'

SPIDER_MODULES = ['sitemap.spiders']
NEWSPIDER_MODULE = 'sitemap.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'sitemap (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'sitemap.middlewares.SitemapSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
DOWNLOADER_MIDDLEWARES = {
    'sitemap.middlewares.SitemapDownloaderMiddleware': 543,
}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'sitemap.pipelines.SitemapPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

Finally, sitemap_mysite.py:

import scrapy, re
from sitemap.items import SitemapItem


class SitemapMysiteSpider(scrapy.Spider):
    name = 'sitemap_mysite'
    #allowed_domains = ['example.com']
    start_urls = ['https://lisper517.top/']

    def parse(self, response):
        print(response.url)
        # Record the current page's URL as an item.
        item = SitemapItem()
        item['url'] = response.url
        yield item
        # Find all quoted in-site links and strip the surrounding quotes.
        other_urls = re.findall(r'"https://lisper517.top/.+?"', response.text)
        other_urls = [url[1:-1] for url in other_urls]
        for other_url in other_urls:
            # Skip static resources; scrapy's built-in duplicate filter
            # takes care of pages that were already requested.
            if not other_url.startswith('https://lisper517.top/resources'):
                yield scrapy.Request(url=other_url, callback=self.parse)

Replace my URL with your own site as needed.
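
As an aside, scrapy's built-in LinkExtractor can replace the hand-written regular expression above; a sketch of the equivalent loop inside parse(), assuming the same site and the same resources exclusion:

from scrapy.linkextractors import LinkExtractor

# Inside parse(), instead of re.findall:
link_extractor = LinkExtractor(allow=r'https://lisper517\.top/',
                               deny=r'https://lisper517\.top/resources')
for link in link_extractor.extract_links(response):
    yield scrapy.Request(url=link.url, callback=self.parse)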

In cmd, run the spider with scrapy crawl sitemap_mysite . A urls.txt file will be generated in the current directory, where you can see which pages your site contains. From this file, build your own sitemap.xml and robots.txt and put them in the site's root directory; a sketch for the sitemap part follows.
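
This last step can also be scripted. A minimal sketch that turns urls.txt into a sitemap.xml in the format shown in section 1; the lastmod, changefreq and priority values are placeholders you should adjust per page:

from datetime import date
from xml.sax.saxutils import escape

# Read the crawled URLs (deduplicated and sorted for a stable file).
with open('urls.txt', encoding='utf-8') as fp:
    urls = sorted({line.strip() for line in fp if line.strip()})

with open('sitemap.xml', 'w', encoding='utf-8') as fp:
    fp.write('<?xml version="1.0" encoding="UTF-8"?>\n')
    fp.write('<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n')
    for url in urls:
        fp.write('   <url>\n')
        fp.write('      <loc>%s</loc>\n' % escape(url))
        fp.write('      <lastmod>%s</lastmod>\n' % date.today().isoformat())
        fp.write('      <changefreq>monthly</changefreq>\n')
        fp.write('      <priority>0.8</priority>\n')
        fp.write('   </url>\n')
    fp.write('</urlset>\n')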

Tags: python, scrapy
