
Steps:

Step 1: Create the project and the Spider template

  • scrapy startproject BaiduStocks
  • cd BaiduStocks
  • scrapy genspider stocks baidu.com
  • then edit spiders/stocks.py

Complete this step on your own. For reference, the generated spiders/stocks.py template is sketched below.
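A minimal sketch of what scrapy genspider produces (the exact template varies by Scrapy version; Step 2 replaces it entirely):

# -*- coding: utf-8 -*-
import scrapy


class StocksSpider(scrapy.Spider):
    name = 'stocks'
    allowed_domains = ['baidu.com']
    start_urls = ['http://baidu.com/']

    def parse(self, response):
        pass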

Step 2: Write the Spider

  • Configure the stocks.py file
  • Modify the handling of returned pages
  • Modify the handling of crawl requests for newly discovered URLs (stocks.py)
# -*- coding: utf-8 -*-
import scrapy
import re


class StocksSpider(scrapy.Spider):
    name = 'stocks'
    start_urls = ['http://quote.eastmoney.com/stock_list.html']

    def parse(self, response):
        # Scan every link on the listing page; keep only hrefs containing
        # a stock code such as sh600000 or sz000001, and request the
        # matching quote page on gu.qq.com.
        for href in response.css('a::attr(href)').extract():
            try:
                stock = re.findall(r"[s][hz]\d{6}", href)[0]
                url = 'http://gu.qq.com/' + stock + '/gp'
                yield scrapy.Request(url, callback=self.parse_stock)
            except IndexError:
                continue

    def parse_stock(self, response):
        # Extract the stock's name/code from the title block and the
        # first 13 <li> fields from the info panel.
        infoDict = {}
        stockName = response.css('.title_bg')
        stockInfo = response.css('.col-2.fr')
        name = stockName.css('.col-1-1').extract()[0]
        code = stockName.css('.col-1-2').extract()[0]
        info = stockInfo.css('li').extract()
        for i in info[:13]:
            # '>.*?<' grabs the text between tags; [1:-1] trims the
            # surrounding '>' and '<'.
            key = re.findall('>.*?<', i)[1][1:-1]
            key = key.replace('\u2003', '')
            key = key.replace('\xa0', '')
            try:
                val = re.findall('>.*?<', i)[3][1:-1]
            except IndexError:
                val = '--'
            infoDict[key] = val

        infoDict.update({'股票名称': re.findall('>.*<', name)[0][1:-1] +
                                    re.findall('>.*<', code)[0][1:-1]})
        yield infoDict

The two calls key = key.replace('\u2003', '') and key = key.replace('\xa0', '') strip useless characters from the scraped strings: HTML entities such as &nbsp; are decoded to \xa0 (and &emsp; to \u2003) during scraping, so we replace them to get clean, readable field names.
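A minimal, self-contained illustration of this cleanup (the <li> fragment is hypothetical, shaped like those on the quote page):

import re

# Hypothetical <li> fragment similar to the quote page's info panel
li = '<li><span class="t2">市盈率\xa0</span><span>12.5</span></li>'

key = re.findall('>.*?<', li)[1][1:-1]   # '市盈率\xa0'
key = key.replace('\u2003', '').replace('\xa0', '')
val = re.findall('>.*?<', li)[3][1:-1]   # '12.5'
print(key, val)                          # 市盈率 12.5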

Step 3: Write the Item Pipelines

  • Configure the pipelines.py file
  • Define a class that processes the crawled items (Scrapy Items) (pipelines.py)
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html


class ScrapyGupiaoPipeline:
    def open_spider(self, spider):
        # Open the output file once when the spider starts.
        self.f = open('gupiao.txt', 'w', encoding='utf-8')

    def close_spider(self, spider):
        # Close the file when the spider finishes.
        self.f.close()

    def process_item(self, item, spider):
        # Write each crawled item as one dict-per-line record.
        try:
            line = str(dict(item)) + '\n'
            self.f.write(line)
        except Exception:
            pass
        return item

Step 4: Configure the ITEM_PIPELINES option (settings.py)

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
   'BaiduStocks.pipelines.ScrapyGupiaoPipeline': 300,
}
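
With the pipeline registered, run the spider from the project root; each crawled item is appended to gupiao.txt:

  • scrapy crawl stocks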

 
