I'm trying to use Scrapy to build a list of dataset summaries from the RATP (Paris public transport authority) open data site, but I keep hitting the errors below and can't get past them. Any guidance would be appreciated.
As far as I can tell, the errors fall into two categories:
- Spider error processing <GET https://data.ratp.fr/explore/?sort=modified> (referer: None)
- NotImplementedError: Post.parse callback is not defined
Here is the site I'm using as a reference.
```python
# filename: items.py
# -*- coding: utf-8 -*-
import scrapy


class Post(scrapy.Item):
    url = scrapy.Field()
    title = scrapy.Field()
    description = scrapy.Field()
```
```python
# filename: scrapy_blog_spider2.py
# -*- coding: utf-8 -*-
import scrapy
from ten_min_scrapy.items import Post


class Post(scrapy.Spider):
    name = 'scrapy_blog_spider2'
    allowed_domains = ['data.ratp.fr/explore/']
    start_urls = ['https://data.ratp.fr/explore/?sort=modified']


def parse(self, response):
    """
    Parse handler for the response.
    """
    # response.css exposes Scrapy's default CSS selectors
    for post in response.css('.ods-catalog-card__body'):
        # build the Post item defined in items.py and pass it downstream
        yield Post(
            url=post.css('ods-catalog-card-title a::attr(href)').extract_first().strip(),
            title=post.css('ods-catalog-card-title a::text').extract_first().strip(),
            description=post.css('ods-catalog-card-description p::text').extract_first().strip(),
        )
```
$ scrapy crawl scrapy_blog_spider2
Output:

```
2020-02-13 16:47:07 [scrapy.utils.log] INFO: Scrapy 1.8.0 started (bot: ten_min_scrapy)
2020-02-13 16:47:07 [scrapy.utils.log] INFO: Versions: lxml 4.3.2.0, libxml2 2.9.9, cssselect 1.1.0, parsel 1.5.2, w3lib 1.21.0, Twisted 19.10.0, Python 3.7.3 (default, Mar 27 2019, 16:54:48) - [Clang 4.0.1 (tags/RELEASE_401/final)], pyOpenSSL 19.0.0 (OpenSSL 1.1.1b 26 Feb 2019), cryptography 2.6.1, Platform Darwin-18.5.0-x86_64-i386-64bit
2020-02-13 16:47:07 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'ten_min_scrapy', 'DOWNLOAD_DELAY': 3, 'HTTPCACHE_ENABLED': True, 'NEWSPIDER_MODULE': 'ten_min_scrapy.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['ten_min_scrapy.spiders']}
2020-02-13 16:47:07 [scrapy.extensions.telnet] INFO: Telnet Password: 1389f773032239f8
2020-02-13 16:47:07 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.logstats.LogStats']
2020-02-13 16:47:07 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats',
 'scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware']
2020-02-13 16:47:07 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2020-02-13 16:47:07 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2020-02-13 16:47:07 [scrapy.core.engine] INFO: Spider opened
2020-02-13 16:47:07 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-02-13 16:47:07 [scrapy.extensions.httpcache] DEBUG: Using filesystem cache storage in /Users/u.nagata/ten_min_scrapy/.scrapy/httpcache
2020-02-13 16:47:07 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2020-02-13 16:47:07 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://data.ratp.fr/robots.txt> (referer: None) ['cached']
2020-02-13 16:47:07 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://data.ratp.fr/explore/?sort=modified> (referer: None) ['cached']
2020-02-13 16:47:07 [scrapy.core.scraper] ERROR: Spider error processing <GET https://data.ratp.fr/explore/?sort=modified> (referer: None)
Traceback (most recent call last):
  File "/anaconda3/lib/python3.7/site-packages/twisted/internet/defer.py", line 654, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "/anaconda3/lib/python3.7/site-packages/scrapy/spiders/__init__.py", line 80, in parse
    raise NotImplementedError('{}.parse callback is not defined'.format(self.__class__.__name__))
NotImplementedError: Post.parse callback is not defined
2020-02-13 16:47:07 [scrapy.core.engine] INFO: Closing spider (finished)
2020-02-13 16:47:07 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 454,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 6178,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 2,
 'elapsed_time_seconds': 0.228215,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2020, 2, 13, 7, 47, 7, 553588),
 'httpcache/hit': 2,
 'log_count/DEBUG': 3,
 'log_count/ERROR': 1,
 'log_count/INFO': 10,
 'memusage/max': 52031488,
 'memusage/startup': 52031488,
 'response_received_count': 2,
 'robotstxt/request_count': 1,
 'robotstxt/response_count': 1,
 'robotstxt/response_status_count/200': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'spider_exceptions/NotImplementedError': 1,
 'start_time': datetime.datetime(2020, 2, 13, 7, 47, 7, 325373)}
```
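Update: rereading the traceback, the error is raised inside Scrapy's base `Spider.parse`, which suggests my own `parse` is never attached to the spider class; indeed it is defined at module level (note the indentation), and the spider class also reuses the name `Post`, shadowing the imported item. Below is a sketch of what I think the fix looks like. The class name `PostSpider` is my own choice to avoid the shadowing, and I've put a bare hostname in `allowed_domains`, since Scrapy matches domains there rather than URL paths:

```python
# filename: scrapy_blog_spider2.py
# -*- coding: utf-8 -*-
import scrapy

from ten_min_scrapy.items import Post


class PostSpider(scrapy.Spider):
    # Renamed from Post so the spider no longer shadows the imported item.
    name = 'scrapy_blog_spider2'
    # allowed_domains expects hostnames only; a path component never matches.
    allowed_domains = ['data.ratp.fr']
    start_urls = ['https://data.ratp.fr/explore/?sort=modified']

    # parse must live inside the class body: defined at module level, Scrapy
    # falls back to the base Spider.parse, which raises
    # NotImplementedError('Post.parse callback is not defined').
    def parse(self, response):
        for post in response.css('.ods-catalog-card__body'):
            # Build the Post item defined in items.py and yield it downstream.
            yield Post(
                url=post.css('ods-catalog-card-title a::attr(href)').extract_first(default='').strip(),
                title=post.css('ods-catalog-card-title a::text').extract_first(default='').strip(),
                description=post.css('ods-catalog-card-description p::text').extract_first(default='').strip(),
            )
```

With this change, `scrapy crawl scrapy_blog_spider2` should at least reach `parse`; whether the `ods-catalog-card-*` selectors still match the current page markup is a separate question.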