
Python: No Detours on the Road to Becoming Beautiful and Handsome

Source: 千锋教育
Published by: wjy
Date: 2022-06-07 11:00:00

  This post introduces the Scrapy crawling framework and shows how to write a spider that fetches articles full of beauty tips.

[Screenshot: an article from Guokr's "美丽也是技术活" (Beauty Is Also a Technical Skill) channel]

  Does the screenshot above pique your interest? If you want to pull down more of these articles, let's start by writing a crawler together!

  First, create a project:

  scrapy startproject guoke

  Then change into the guoke directory and run the following command:

  scrapy genspider beauty www.guokr.com
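
  If both commands succeed, Scrapy generates a project skeleton roughly like the following (a sketch of the standard layout; beauty.py is the spider file created by genspider, and the other files are filled in later in this post):

guoke/
    scrapy.cfg
    guoke/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            beauty.py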

  Now open the newly created guoke project in PyCharm. Analyzing the site shows that the additional content of Guokr's "美丽也是技术活" (Beauty Is Also a Technical Skill) channel is loaded as JSON data through XHR requests.

[Screenshots: the XHR requests and the returned JSON in the browser developer tools]

  We want to fetch not only each JSON record but also the detail page behind each record.

[Screenshot: a JSON record and its detail page]

  For each article we want its title, id, summary, url, and date_created, and we also want to save the detail content of every article into a txt file.
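
  Based on the fields used in the spider below, the JSON returned by the interface has roughly the following shape (an illustrative sketch only; the field names are taken from the spider code and the values are placeholders):

{
    "result": [
        {
            "id": 123456,
            "title": "...",
            "url": "https://www.guokr.com/article/123456/",
            "summary": "...",
            "date_created": "..."
        }
    ]
}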

  Reference code for the spider (beauty.py):

import scrapy
import json

from guoke.items import GuokeItem, GuokeDetailItem


class BeautySpider(scrapy.Spider):
    name = 'beauty'
    allowed_domains = ['www.guokr.com']
    offset = 0
    # JSON interface of the channel; offset and limit control the pagination
    start_urls = [
        'https://www.guokr.com/apis/minisite/article.json?retrieve_type=by_wx&channel_key=pac&offset=0&limit=10']

    def parse(self, response):
        # the interface returns JSON; the article list sits under the 'result' key
        rs = json.loads(response.text)
        article_list = rs.get('result')
        for article in article_list:
            item = GuokeItem()
            item['title'] = article.get('title')
            item['id'] = article.get('id')
            url = article.get('url')
            item['url'] = url
            item['summary'] = article.get('summary')
            item['date_created'] = article.get('date_created')
            # also request the detail page of this article; the Referer and a
            # browser User-Agent make the request look less like a crawler
            yield scrapy.Request(url, callback=self.parse_item, headers={'Referer': 'https://www.guokr.com/pretty',
                                                                         'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.1.2 Safari/605.1.15'})
            yield item
        # request the next page until the offset cap is reached; note that the
        # channel_key below differs from the one in start_urls ('beauty' vs 'pac'),
        # so make sure both point at the channel you actually want
        if self.offset <= 71130:
            self.offset += 10
            url = 'https://www.guokr.com/apis/minisite/article.json?retrieve_type=by_wx&channel_key=beauty&offset=' + str(
                self.offset) + '&limit=10'
            yield scrapy.Request(url, callback=self.parse)

    def parse_item(self, response):
        # parse a detail page: collect the text of all regular paragraphs
        item = GuokeDetailItem()
        item['url'] = response.url
        all_txt = response.xpath('//p[@style="white-space: normal;"]//text()').extract()
        detail_str = ''
        for txt in all_txt:
            detail_str += txt

        item['detail'] = detail_str
        yield item

Reference code for items.py:

import scrapy


class GuokeItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    id = scrapy.Field()
    title = scrapy.Field()
    url = scrapy.Field()
    summary = scrapy.Field()
    date_created = scrapy.Field()


class GuokeDetailItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    url = scrapy.Field()
    detail = scrapy.Field()

  During crawling you may run into some common error responses, such as:

  Frequent responses with HTTP 404, 301 or 50x errors:

  (1) 301 Moved Permanently

  (2) 401 Unauthorized

  (3) 403 Forbidden

  (4) 404 Not Found

  (5) 408 Request Timeout

  (6) 429 Too Many Requests

  (7) 503 Service Unavailable

  Possible fixes:

  In settings.py, set ROBOTSTXT_OBEY to False.

  In settings.py, set DOWNLOAD_DELAY to a suitable value (the default is 0); a sketch of both settings follows this list.

  Configure a downloader middleware.
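
  The first two tweaks look roughly like this in settings.py (the delay value is just an example):

ROBOTSTXT_OBEY = False   # stop obeying robots.txt
DOWNLOAD_DELAY = 0.5     # seconds to wait between requests; the default is 0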

  The concrete steps for the middleware are as follows:

  a. Add a USER_AGENTS_LIST to settings.py, with content like the following:

USER_AGENTS_LIST = [
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; AcooBrowser; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; Acoo Browser; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 3.0.04506)",
    "Mozilla/4.0 (compatible; MSIE 7.0; AOL 9.5; AOLBuild 4337.35; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
    "Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US)",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 2.0.50727; Media Center PC 6.0)",
    "Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 1.0.3705; .NET CLR 1.1.4322)",
    "Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.0.04506.30)",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN) AppleWebKit/523.15 (KHTML, like Gecko, Safari/419.3) Arora/0.3 (Change: 287 c9dfb30)",
    "Mozilla/5.0 (X11; U; Linux; en-US) AppleWebKit/527+ (KHTML, like Gecko, Safari/419.3) Arora/0.6",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.2pre) Gecko/20070215 K-Ninja/2.1.1",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9) Gecko/20080705 Firefox/3.0 Kapiko/3.0",
    "Mozilla/5.0 (X11; Linux i686; U;) Gecko/20070322 Kazehakase/0.4.5",
    "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.8) Gecko Fedora/1.9.0.8-1.fc10 Kazehakase/0.5.6",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/535.20 (KHTML, like Gecko) Chrome/19.0.1036.7 Safari/535.20",
    "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; fr) Presto/2.9.168 Version/11.52",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.11 TaoBrowser/2.0 Safari/536.11",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.71 Safari/537.1 LBBROWSER",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; LBBROWSER)",
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; QQDownload 732; .NET4.0C; .NET4.0E; LBBROWSER)",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.84 Safari/535.11 LBBROWSER",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; QQBrowser/7.0.3698.400)",
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; QQDownload 732; .NET4.0C; .NET4.0E)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; SV1; QQDownload 732; .NET4.0C; .NET4.0E; 360SE)",
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; QQDownload 732; .NET4.0C; .NET4.0E)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)",
    "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1",
    "Mozilla/5.0 (iPad; U; CPU OS 4_2_1 like Mac OS X; zh-cn) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8C148 Safari/6533.18.5",
    "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:2.0b13pre) Gecko/20110307 Firefox/4.0b13pre",
    "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11",
    "Mozilla/5.0 (X11; U; Linux x86_64; zh-CN; rv:1.9.2.10) Gecko/20100922 Ubuntu/10.10 (maverick) Firefox/3.6.10"
]

  b. Create a new Python file, for example named useragentmiddlewares.py, and add the following code to it:

import random

from scrapy.downloadermiddlewares.useragent import UserAgentMiddleware


class MyUserAgentMiddleware(UserAgentMiddleware):
    """Set a random User-Agent on every request."""

    def __init__(self, user_agent):
        super(MyUserAgentMiddleware, self).__init__(user_agent)
        self.user_agent = user_agent

    @classmethod
    def from_crawler(cls, crawler):
        # read the pool of user agents from USER_AGENTS_LIST in settings.py
        return cls(
            user_agent=crawler.settings.get('USER_AGENTS_LIST')
        )

    def process_request(self, request, spider):
        # pick a random user agent for this request
        agent = random.choice(self.user_agent)
        request.headers['User-Agent'] = agent

  c. Register MyUserAgentMiddleware in DOWNLOADER_MIDDLEWARES in settings.py:

  

DOWNLOADER_MIDDLEWARES = {
    # the module name must match the file created in step b
    'guoke.useragentmiddlewares.MyUserAgentMiddleware': 400,
    'guoke.middlewares.GuokeDownloaderMiddleware': 543,
}

  With this in place you will find that some of those errors disappear. Crawling too fast, or always sending the same User-Agent, lets the target site identify you as a crawler and apply anti-scraping measures.

The code for pipelines.py, which is mainly responsible for storing the data, is as follows:

# useful for handling different item types with a single interface
from itemadapter import ItemAdapter
import csv

from guoke.items import GuokeItem, GuokeDetailItem


class GuokePipeline:
    def __init__(self):
        # CSV file that collects the listing data of every article
        self.article_stream = open('articles.csv', 'w', newline='', encoding='utf-8')
        self.f = csv.writer(self.article_stream)

    def process_item(self, item, spider):
        if isinstance(item, GuokeItem):
            # one CSV row per article from the listing interface
            data = [item.get('id'), item.get('title'), item.get('summary'), item.get('url'), item.get('date_created')]
            self.f.writerow(data)

        elif isinstance(item, GuokeDetailItem):
            # save the detail text of each article into its own txt file,
            # named after the article id at the end of the url
            url = item['url']
            id = url[:-1].rsplit('/')[-1]
            with open(str(id) + '.txt', 'w', encoding='utf-8') as stream:
                stream.write(item['detail'])
        # hand the item back to Scrapy so later pipelines can still see it
        return item

    def close_spider(self, spider):
        # close the CSV file when the spider finishes
        self.article_stream.close()
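
  Note that the pipeline only runs if it is enabled in settings.py, for example (300 is an arbitrary priority value):

ITEM_PIPELINES = {
    'guoke.pipelines.GuokePipeline': 300,
}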

  Once everything is in place, we can run the spider and test it:

scrapy crawl beauty
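
  If you only want a quick look at the scraped items, Scrapy's built-in feed export can also dump them to a file without going through the pipeline:

scrapy crawl beauty -o articles.json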

  Here is what the results look like:

[Screenshots: the generated articles.csv and the per-article txt files]

  Open one of the txt files and take a look:

[Screenshot: contents of one of the txt files]

  Of course, because I dropped the line breaks during parsing, the saved text has no line-break formatting; you can further optimize the code in spider.py above, for example along the lines of the sketch below.
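
  A minimal tweak to parse_item that keeps one line per extracted text fragment (this replaces the string-concatenation loop in the spider above):

    def parse_item(self, response):
        item = GuokeDetailItem()
        item['url'] = response.url
        all_txt = response.xpath('//p[@style="white-space: normal;"]//text()').extract()
        # join the text fragments with newlines instead of concatenating them directly
        item['detail'] = '\n'.join(txt.strip() for txt in all_txt if txt.strip())
        yield item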

