Hello, this is my first post.
I decided to start learning about image recognition today, and to get some hands-on experience I followed this article step by step, using a Raspberry Pi 4 and a camera module.
However, at the final step, running the following command produced an error.
```bash
python3 classify_picamera.py \
  --model /tmp/mobilenet_v1_1.0_224_quant_edgetpu.tflite \
  --labels /tmp/labels_mobilenet_quant_v1_224.txt
```
Error output
```bash
INFO: Initialized TensorFlow Lite runtime.
Traceback (most recent call last):
  File "classify_picamera.py", line 99, in <module>
    main()
  File "classify_picamera.py", line 74, in main
    interpreter.allocate_tensors()
  File "/home/pi/.local/lib/python3.7/site-packages/tflite_runtime/interpreter.py", line 244, in allocate_tensors
    return self._interpreter.AllocateTensors()
  File "/home/pi/.local/lib/python3.7/site-packages/tflite_runtime/interpreter_wrapper.py", line 111, in AllocateTensors
    return _interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)
RuntimeError: Encountered unresolved custom op: edgetpu-custom-op.Node number 0 (edgetpu-custom-op) failed to prepare.
```
The difference between the article's environment and mine is probably whether an accelerator (?) is attached.
I suspect the Edge TPU (edgetpu) is involved, but since I was just copying the article to try things out, I honestly don't know what I should fix.
I look forward to your answers. Thank you in advance.
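If the article's download step also fetched a plain (non-Edge-TPU) model file alongside the `_edgetpu` one (I'm not sure it did), my unconfirmed guess is that a CPU-only run would look something like this:

```shell
# Guess: point --model at a model file WITHOUT the _edgetpu suffix,
# since edgetpu-custom-op seems to need the accelerator hardware.
# (Assumes /tmp/mobilenet_v1_1.0_224_quant.tflite exists -- not verified.)
python3 classify_picamera.py \
  --model /tmp/mobilenet_v1_1.0_224_quant.tflite \
  --labels /tmp/labels_mobilenet_quant_v1_224.txt
```

Is that the right direction, or is there something in the script itself I need to change?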
Code that I think contains the part that needs fixing
```python
# python3
#
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Example using TF Lite to classify objects with the Raspberry Pi camera."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import argparse
import io
import time
import numpy as np
import picamera

from PIL import Image
from tflite_runtime.interpreter import Interpreter


def load_labels(path):
  with open(path, 'r') as f:
    return {i: line.strip() for i, line in enumerate(f.readlines())}


def set_input_tensor(interpreter, image):
  tensor_index = interpreter.get_input_details()[0]['index']
  input_tensor = interpreter.tensor(tensor_index)()[0]
  input_tensor[:, :] = image


def classify_image(interpreter, image, top_k=1):
  """Returns a sorted array of classification results."""
  set_input_tensor(interpreter, image)
  interpreter.invoke()
  output_details = interpreter.get_output_details()[0]
  output = np.squeeze(interpreter.get_tensor(output_details['index']))

  # If the model is quantized (uint8 data), then dequantize the results
  if output_details['dtype'] == np.uint8:
    scale, zero_point = output_details['quantization']
    output = scale * (output - zero_point)

  ordered = np.argpartition(-output, top_k)
  return [(i, output[i]) for i in ordered[:top_k]]


def main():
  parser = argparse.ArgumentParser(
      formatter_class=argparse.ArgumentDefaultsHelpFormatter)
  parser.add_argument(
      '--model', help='File path of .tflite file.', required=True)
  parser.add_argument(
      '--labels', help='File path of labels file.', required=True)
  args = parser.parse_args()

  labels = load_labels(args.labels)

  interpreter = Interpreter(args.model)
  interpreter.allocate_tensors()
  _, height, width, _ = interpreter.get_input_details()[0]['shape']

  with picamera.PiCamera(resolution=(640, 480), framerate=30) as camera:
    camera.start_preview()
    try:
      stream = io.BytesIO()
      for _ in camera.capture_continuous(
          stream, format='jpeg', use_video_port=True):
        stream.seek(0)
        image = Image.open(stream).convert('RGB').resize((width, height),
                                                         Image.ANTIALIAS)
        start_time = time.time()
        results = classify_image(interpreter, image)
        elapsed_ms = (time.time() - start_time) * 1000
        label_id, prob = results[0]
        stream.seek(0)
        stream.truncate()
        camera.annotate_text = '%s %.2f\n%.1fms' % (labels[label_id], prob,
                                                    elapsed_ms)
    finally:
      camera.stop_preview()


if __name__ == '__main__':
  main()
```
Command I ran
```bash
python3 classify_picamera.py \
  --model /tmp/mobilenet_v1_1.0_224_quant_edgetpu.tflite \
  --labels /tmp/labels_mobilenet_quant_v1_224.txt
```