Question edit history

2

Added the location of the model.

2023/01/09 01:53

Posted

Mizuiro_sakura

Score 5

test CHANGED
File without changes
test CHANGED
@@ -10,6 +10,8 @@
  I want to run question-answering with the pipeline feature of transformers.
  The models I use are studio-ousia/luke-japanese-base-lite and
  My_luke_model_squad.pth, a model obtained by fine-tuning studio-ousia/luke-japanese-base-lite on the DDQA dataset (a driving-domain QA dataset).
+
+ My_luke_model_squad.pth is available here ( https://huggingface.co/Mizuiro-sakura/luke-japanese-finetuned-question-answering/tree/main )

  ### Problem / error message

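For context, here is a minimal sketch of the setup this revision describes. It assumes My_luke_model_squad.pth holds a state_dict saved with torch.save(model.state_dict(), ...), and the question/context strings are placeholders, not taken from the question. This is essentially the call path that produces the error recorded in the revision below:

```python
import torch
from transformers import MLukeTokenizer, LukeForQuestionAnswering, pipeline

# Base checkpoint; the traceback below shows this model uses MLukeTokenizer.
tokenizer = MLukeTokenizer.from_pretrained("studio-ousia/luke-japanese-base-lite")
model = LukeForQuestionAnswering.from_pretrained("studio-ousia/luke-japanese-base-lite")

# Assumption: the .pth file is a state_dict of the fine-tuned model;
# if it was saved as a whole model object, torch.load alone would return it.
model.load_state_dict(torch.load("My_luke_model_squad.pth", map_location="cpu"))

# The question-answering pipeline described in the question.
qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
result = qa(question="何を運転していましたか", context="彼は昨日、車を運転して東京へ行きました。")
print(result)
```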

1

Added the full text of the error.

2023/01/07 07:15

Posted

Mizuiro_sakura

Score 5

test CHANGED
File without changes
test CHANGED
@@ -14,6 +14,49 @@
  ### Problem / error message

  ```
+ multiprocessing.pool.RemoteTraceback:
+ """
+ Traceback (most recent call last):
+ File "C:\Users\AppData\Local\Programs\Python\Python39\lib\multiprocessing\pool.py", line 125, in worker
+ result = (True, func(*args, **kwds))
+ File "C:\Users\AppData\Local\Programs\Python\Python39\lib\multiprocessing\pool.py", line 48, in mapstar
+ return list(map(*args))
+ File "C:\Users\AppData\Local\Programs\Python\Python39\lib\site-packages\transformers\data\processors\squad.py", line 180, in squad_convert_example_to_features
+ encoded_dict = tokenizer.encode_plus( # TODO(thom) update this logic
+ File "C:\Users\AppData\Local\Programs\Python\Python39\lib\site-packages\transformers\tokenization_utils_base.py", line 2667, in encode_plus
+ return self._encode_plus(
+ File "C:\Users\AppData\Local\Programs\Python\Python39\lib\site-packages\transformers\models\mluke\tokenization_mluke.py", line 559, in _encode_plus
+ ) = self._create_input_sequence(
+ File "C:\Users\AppData\Local\Programs\Python\Python39\lib\site-packages\transformers\models\mluke\tokenization_mluke.py", line 775, in _create_input_sequence
+ first_ids = get_input_ids(text)
+ File "C:\Users\AppData\Local\Programs\Python\Python39\lib\site-packages\transformers\models\mluke\tokenization_mluke.py", line 735, in get_input_ids
+ tokens = self.tokenize(text, **kwargs)
+ File "C:\Users\AppData\Local\Programs\Python\Python39\lib\site-packages\transformers\tokenization_utils.py", line 520, in tokenize
+ if token in no_split_token:
+ TypeError: unhashable type: 'list'
+ """
+
+ The above exception was the direct cause of the following exception:
+
+ Traceback (most recent call last):
+ File "C:\Users\desktop\Python\luke_squad.py", line 15, in <module>
+ result=qa(text)
+ File "C:\Users\AppData\Local\Programs\Python\Python39\lib\site-packages\transformers\pipelines\question_answering.py", line 380, in __call__
+ return super().__call__(examples[0], **kwargs)
+ File "C:\Users\AppData\Local\Programs\Python\Python39\lib\site-packages\transformers\pipelines\base.py", line 1074, in __call__
+ return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
+ File "C:\Users\AppData\Local\Programs\Python\Python39\lib\site-packages\transformers\pipelines\base.py", line 1095, in run_single
+ for model_inputs in self.preprocess(inputs, **preprocess_params):
+ File "C:\Users\AppData\Local\Programs\Python\Python39\lib\site-packages\transformers\pipelines\question_answering.py", line 396, in preprocess
+ features = squad_convert_examples_to_features(
+ File "C:\Users\AppData\Local\Programs\Python\Python39\lib\site-packages\transformers\data\processors\squad.py", line 377, in squad_convert_examples_to_features
+ features = list(
+ File "C:\Users\AppData\Local\Programs\Python\Python39\lib\site-packages\tqdm\std.py", line 1111, in __iter__
+ for obj in iterable:
+ File "C:\Users\AppData\Local\Programs\Python\Python39\lib\multiprocessing\pool.py", line 420, in <genexpr>
+ return (item for chunk in result for item in chunk)
+ File "C:\Users\AppData\Local\Programs\Python\Python39\lib\multiprocessing\pool.py", line 870, in next
+ raise value
  TypeError: unhashable type: 'list'
  ```

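Reading the traceback bottom-up: qa(text) enters the question-answering pipeline, whose preprocess step runs squad_convert_examples_to_features; a worker process then calls tokenizer.encode_plus, and deep in the slow tokenizer path the membership test `if token in no_split_token:` is handed a list rather than a string, so the hash lookup raises TypeError: unhashable type: 'list'. For comparison, the sketch below sidesteps the pipeline's SQuAD preprocessing entirely and calls the model directly with plain strings; the weight-loading line and the question/context strings are illustrative assumptions, not a confirmed fix for this question:

```python
import torch
from transformers import MLukeTokenizer, LukeForQuestionAnswering

tokenizer = MLukeTokenizer.from_pretrained("studio-ousia/luke-japanese-base-lite")
model = LukeForQuestionAnswering.from_pretrained("studio-ousia/luke-japanese-base-lite")
# Assumption: the fine-tuned weights are a state_dict at this local path.
model.load_state_dict(torch.load("My_luke_model_squad.pth", map_location="cpu"))
model.eval()

question = "何を運転していましたか"          # placeholder question
context = "彼は昨日、車を運転して東京へ行きました。"  # placeholder context

# Encode question and context as a string pair; this avoids the token-list
# inputs that squad_convert_examples_to_features feeds to encode_plus.
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Take the argmax start/end span and decode it back to text.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
print(answer)
```

The design point is that the failure sits in the pipeline's SQuAD feature conversion, not in the model itself, so driving the tokenizer and model by hand keeps every call on the string-input path that MLukeTokenizer handles.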