### Background and what I want to achieve

I'm building a system in Python that scrapes information from a fashion site.
I believe an error is occurring while getting names from the bs4 object, but I can't tell where to fix it. I'd appreciate your help.

The command I run is:

```
$ python main.py
```
### The problem / error message

```
PS C:\brandrecommend\brand_recommend> python main.py
C:\brandrecommend\brand_recommend\scrayping.py:17: UserWarning: No parser was explicitly specified, so I'm using the best available HTML parser for this system ("lxml"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.
The code that caused this warning is on line 17 of the file C:\brandrecommend\brand_recommend\scrayping.py. To get rid of this warning, pass the additional argument 'features="lxml"' to the BeautifulSoup constructor.
  return BeautifulSoup(res.text)
Traceback (most recent call last):
  File "main.py", line 19, in <module>
    NameList.extend(getNameList(snaps))
  File "C:\brandrecommend\brand_recommend\scrayping.py", line 33, in getNameList
    NameList.append(snap.find('a')['title'])
TypeError: 'NoneType' object is not subscriptable
```
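For what it's worth, this `TypeError` is the generic failure mode when `snap.find('a')` returns `None` (i.e. a tile with no `<a>` tag) and the result is immediately subscripted. A minimal stand-alone illustration, with no bs4 or network needed:

```python
# snap.find('a') returns None when a tile contains no <a> tag;
# subscripting None then raises exactly the TypeError in the traceback.
a = None  # stand-in for the result of snap.find('a') on an <a>-less tile

try:
    title = a['title']
except TypeError as err:
    message = str(err)

print(message)  # 'NoneType' object is not subscriptable
```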
### Relevant source code
```python
# scrayping.py
# -*- coding: utf-8 -*-

import re

import numpy as np
from bs4 import BeautifulSoup
import requests
import pandas as pd
import collections


# Create a bs4 object from a URL
def openURL(URL: str):
    res = requests.get(URL)
    if res.status_code != requests.codes.ok:
        print('Error')
        return False
    return BeautifulSoup(res.text)


# Get the list of worn brands from the bs4 object
def getBrandList(snaps):
    BrandList = []
    for snap in snaps:
        BrandList.append(list(set(map(lambda x: x.text, snap.find_all(
            'a', attrs={'href': re.compile('^/snaps/brand')})))))
    return BrandList


# Get the names from the bs4 object
def getNameList(snaps):
    NameList = []
    for snap in snaps:
        NameList.append(snap.find('a')['title'])
    return NameList


def getuniqueList(df):
    uniqueList = []

    for columns in df.columns.values:
        uniqueList.extend(df[columns])

    return collections.Counter(uniqueList)
```
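A None-safe variant of `getNameList` (plus an explicit parser, e.g. `BeautifulSoup(res.text, 'lxml')`, as the UserWarning suggests) is what I imagine the fix might look like. The `FakeTag` class below is only a hypothetical stand-in so this sketch runs without bs4 or network access; with real bs4 `Tag` objects the same `find`/`.get` calls apply:

```python
# Guarded getNameList: skip tiles whose <a> is missing or has no title attribute.
def getNameListSafe(snaps):
    NameList = []
    for snap in snaps:
        a = snap.find('a')
        if a is not None and a.get('title') is not None:
            NameList.append(a['title'])
    return NameList


# Hypothetical stand-in for a bs4 Tag, just to make the sketch runnable offline.
class FakeTag:
    def __init__(self, title=None, has_a=True):
        self._title = title
        self._has_a = has_a

    def find(self, name):
        # Mimics Tag.find: returns None when the child tag is absent.
        return self if (name == 'a' and self._has_a) else None

    def get(self, key):
        # Mimics Tag.get: returns None for a missing attribute.
        return self._title if key == 'title' else None

    def __getitem__(self, key):
        value = self.get(key)
        if value is None:
            raise KeyError(key)
        return value


snaps = [FakeTag('Alice'), FakeTag(None), FakeTag('Bob', has_a=False)]
print(getNameListSafe(snaps))  # ['Alice']
```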
```python
# main.py

from scrayping import *
from predict import *
from predict2 import *

if __name__ == '__main__':
    flag = True
    page = 1
    BrandList = []
    NameList = []

    while flag:
        URL = 'https://www.fashion-press.net/snaps/sex/mens?page=' + str(page)
        soup = openURL(URL)
        snaps = soup.find_all(attrs={'class': 'fp_media_tile'})

        if len(snaps) != 0:  # If there are photos, get the brands
            tmpBrandList = getBrandList(snaps)
            BrandList.extend(tmpBrandList)
            NameList.extend(getNameList(snaps))
            print('get page' + str(page))
            page += 1

        else:  # If there are no photos, stop
            flag = False
            print('END')

    df = pd.DataFrame(data=BrandList, index=NameList)  # Convert to a pandas DataFrame
    df.to_csv('StreetSnapMen.csv')
    # df = pd.read_csv('StreetSnapMen.csv', index_col=0)

    brand = getuniqueList(df)
    brand_df = pd.DataFrame(index=list(brand.keys()),
                            data=list(brand.values()))
    brand_df = brand_df.drop(np.nan)  # Drop NaN
```
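The loop in main.py pages forward until a page comes back with no tiles. The same stop condition can be sketched with a stubbed fetch (the page counts and tile contents here are made up for illustration):

```python
def fetch_snaps(page):
    # Hypothetical stub: pretend pages 1-3 each have two tiles, page 4 is empty.
    return ['tile'] * 2 if page <= 3 else []

page = 1
all_snaps = []
while True:
    snaps = fetch_snaps(page)
    if not snaps:  # no photos on this page -> past the last page, stop
        break
    all_snaps.extend(snaps)
    page += 1

print(page, len(all_snaps))  # 4 6
```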
```python
# predict.py
# -*- coding: utf-8 -*-

import re

import numpy as np
from bs4 import BeautifulSoup
import requests
import pandas as pd
import collections


def bays(model, A, B=None):
    pB = (model == B).sum().sum()
    pAB = (((model == A) | (model == B)).sum(axis=1) > 1).sum()
    return pAB / pB


def predict(df, brand_df, wear, k=3):
    prob = []

    for brand in brand_df.index:
        prob.append(bays(df, brand, wear))

    best_k = sorted(range(len(prob)), key=lambda i: prob[i], reverse=True)[:k]
    return list(map(lambda k: brand_df.index[k], best_k))
```
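As I understand it, `bays` estimates a conditional probability over the snap-by-brand table: `pB` counts every cell equal to B, and `pAB` counts the rows where more than one cell falls in {A, B}. A pure-Python sketch of the same computation over plain lists (the brand names are made up for illustration):

```python
def cooccurrence_prob(rows, A, B):
    # rows: one list of brand names per snap, standing in for the DataFrame rows.
    pB = sum(row.count(B) for row in rows)                      # cells equal to B
    pAB = sum(1 for row in rows
              if sum(1 for cell in row if cell in (A, B)) > 1)  # rows with >1 hit in {A, B}
    return pAB / pB


rows = [['X', 'Y'], ['X', 'Z'], ['Y', 'Z']]
# 'Y' appears in 2 rows; only the first row contains both 'X' and 'Y'.
print(cooccurrence_prob(rows, 'X', 'Y'))  # 0.5
```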
### What I tried
### Supplementary information (frameworks/tool versions)

Windows 10
Python 3.8.2