I'm using Scrapy on Python 3.6 to save data from a certain site into MongoDB.
Running scrapy crawl corporate fails with an error.
I can't make much of the output, but it mentions pymongo.errors.
Is this a pymongo-related error?
If anyone can tell what is going on, please let me know. Thank you in advance.
Environment: Windows 10
Python 3.6 (Anaconda)
(C:\Users\@@@\Anaconda3) C:\Users\@@@\@@@>scrapy crawl corporate  (executed)

2017-12-31 09:37:49 [scrapy.utils.log] INFO: Scrapy 1.5.0 started (bot: @@@)
2017-12-31 09:37:49 [scrapy.utils.log] INFO: Versions: lxml 4.1.0.0, libxml2 2.9.4, cssselect 1.0.3, parsel 1.3.1, w3lib 1.18.0, Twisted 17.9.0, Python 3.6.3 |Anaconda custom (64-bit)| (default, Oct 15 2017, 03:27:45) [MSC v.1900 64 bit (AMD64)], pyOpenSSL 17.2.0 (OpenSSL 1.0.2l 25 May 2017), cryptography 2.0.3, Platform Windows-10-10.0.16299-SP0
2017-12-31 09:37:49 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'shikiho', 'DOWNLOAD_DELAY': 10, 'NEWSPIDER_MODULE': '@@@.spiders', 'SPIDER_MODULES': ['@@@.spiders'], 'USER_AGENT': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36'}
2017-12-31 09:37:50 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2017-12-31 09:37:50 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-12-31 09:37:50 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-12-31 09:37:50 [scrapy.middleware] INFO: Enabled item pipelines:
['@@@.pipelines.CorporatePipeline']
2017-12-31 09:37:50 [scrapy.core.engine] INFO: Spider opened
2017-12-31 09:37:50 [scrapy.core.engine] INFO: Closing spider (shutdown)
2017-12-31 09:37:50 [root] INFO: 2017-12-31 09:37:50 Closed the MongoDB connection
2017-12-31 09:37:50 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'finish_reason': 'shutdown',
 'finish_time': datetime.datetime(2017, 12, 31, 0, 37, 50, 193514),
 'log_count/INFO': 7}
2017-12-31 09:37:50 [scrapy.core.engine] INFO: Spider closed (shutdown)
Unhandled error in Deferred:
2017-12-31 09:37:50 [twisted] CRITICAL: Unhandled error in Deferred:

2017-12-31 09:37:50 [twisted] CRITICAL:
Traceback (most recent call last):
  File "c:\users\@@@\anaconda3\lib\site-packages\twisted\internet\defer.py", line 1386, in _inlineCallbacks
    result = g.send(result)
  File "c:\users\@@@\anaconda3\lib\site-packages\scrapy\crawler.py", line 82, in crawl
    yield self.engine.open_spider(self.spider, start_requests)
pymongo.errors.OperationFailure: Authentication failed.
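For reference, my understanding is that an authenticated pymongo connection needs the username, password, and authSource to match the user created on the server, otherwise MongoClient raises exactly this OperationFailure. Below is a minimal sketch of how I believe the connection URI should be built (the host, user, and password names here are hypothetical, not my actual pipeline code):

```python
# Sketch: building an authenticated MongoDB connection URI (hypothetical names).
from urllib.parse import quote_plus

def build_mongo_uri(user, password, host="localhost", port=27017, auth_db="admin"):
    # Percent-encode the credentials so special characters (@, /, :) don't
    # break the URI, and point authSource at the DB the user was created in.
    return "mongodb://%s:%s@%s:%d/?authSource=%s" % (
        quote_plus(user), quote_plus(password), host, port, auth_db)

uri = build_mongo_uri("scrapy_user", "p@ss/word")
print(uri)  # mongodb://scrapy_user:p%40ss%2Fword@localhost:27017/?authSource=admin

# Actually connecting and verifying auth would then be:
# from pymongo import MongoClient
# client = MongoClient(uri)
# client.admin.command("ping")  # raises OperationFailure if auth fails
```

So my suspicion is that the credentials or authSource my pipeline passes to MongoClient don't match the MongoDB server's user, but I'm not sure how to confirm this from the log alone.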