Background / What I want to achieve
I found a site describing a Python image-recognition system that classifies rock-paper-scissors hand shapes.
When I ran the code exactly as written on the site, an error occurred.
Problem / error message
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-24-6c44726a183e> in <module>()
----> 1 model.compile(loss='categorical_crossentropy',optimizer='adam',metrics=['acc'])

C:\Anaconda3\envs\Sotsuken\lib\site-packages\keras\engine\training.py in compile(self, optimizer, loss, metrics, loss_weights, sample_weight_mode, weighted_metrics, target_tensors, **kwargs)
    227         #   loss_weight_2 * output_2_loss_fn(...) +
    228         #   layer losses.
--> 229         self.total_loss = self._prepare_total_loss(masks)
    230
    231         # Functions for train, test and predict will

C:\Anaconda3\envs\Sotsuken\lib\site-packages\keras\engine\training.py in _prepare_total_loss(self, masks)
    690
    691                     output_loss = loss_fn(
--> 692                         y_true, y_pred, sample_weight=sample_weight)
    693
    694                 if len(self.outputs) > 1:

C:\Anaconda3\envs\Sotsuken\lib\site-packages\keras\losses.py in __call__(self, y_true, y_pred, sample_weight)
     71         losses = self.call(y_true, y_pred)
     72         return losses_utils.compute_weighted_loss(
---> 73             losses, sample_weight, reduction=self.reduction)
     74
     75     @classmethod

C:\Anaconda3\envs\Sotsuken\lib\site-packages\keras\utils\losses_utils.py in compute_weighted_loss(losses, sample_weight, reduction, name)
    164     # Update dimensions of `sample_weight` to match with `losses` if possible.
    165     losses, _, sample_weight = squeeze_or_expand_dimensions(
--> 166         losses, None, sample_weight)
    167
    168     # Broadcast weights if possible.

C:\Anaconda3\envs\Sotsuken\lib\site-packages\keras\utils\losses_utils.py in squeeze_or_expand_dimensions(y_pred, y_true, sample_weight)
     74     if y_pred_rank == 0 and weights_rank == 1:
     75         y_pred = K.expand_dims(y_pred, -1)
---> 76     elif weights_rank - y_pred_rank == 1:
     77         sample_weight = K.squeeze(sample_weight, -1)
     78     elif y_pred_rank - weights_rank == 1:

TypeError: unsupported operand type(s) for -: 'int' and 'NoneType'
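For context on what the last frame of the traceback is doing: Keras is subtracting two tensor ranks inside squeeze_or_expand_dimensions, and the TypeError means one of the ranks came back as None (i.e. no static rank could be determined), which is commonly reported when a standalone Keras release is paired with an incompatible TensorFlow version. The failing arithmetic can be illustrated in isolation; the concrete rank values below are assumptions for the sketch:

```python
# Stand-in for the expression at losses_utils.py line 76:
# `weights_rank - y_pred_rank`, where one rank is None because
# Keras could not determine a static rank for the tensor.
weights_rank = 1      # assumed: rank of sample_weight
y_pred_rank = None    # assumed: K.ndim() returned None for the loss tensor

try:
    weights_rank - y_pred_rank
except TypeError as e:
    print(e)  # unsupported operand type(s) for -: 'int' and 'NoneType'
```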
Relevant source code
Python
import keras
from keras import layers
from keras import models
from keras import optimizers
from keras.preprocessing.image import ImageDataGenerator
import matplotlib.pyplot as plt
import tensorflowjs as tfjs

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
                        input_shape=(100, 100, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))

model.compile(loss='categorical_crossentropy',
              optimizer='adam', metrics=['acc'])

classes = ['zero', 'one', 'two', 'three', 'four',
           'five', 'seven', 'eight', 'nine']

train_dir = 'hand_sign_digit_data/train'
validation_dir = 'hand_sign_digit_data/validation'

train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,)

test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(100, 100),
    batch_size=32,
    class_mode='categorical')

validation_generator = test_datagen.flow_from_directory(
    validation_dir,
    target_size=(100, 100),
    batch_size=32,
    class_mode='categorical')

history = model.fit_generator(
    train_generator,
    steps_per_epoch=100,
    epochs=100,
    validation_data=validation_generator,
    validation_steps=10)

model.save('sign_language_vgg16_1.h5')

# convert the vgg16 model into tf.js model
save_path = '../nodejs/static/sign_language_vgg16'
tfjs.converters.save_keras_model(model, save_path)
print("[INFO] saved tf.js vgg16 model to disk..")

acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(len(acc))

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()

plt.figure()

plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()

plt.show()
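One thing worth flagging in passing, separate from the compile error: the classes list in the script has only nine entries ('six' is absent), while the final Dense layer outputs 10 classes. A quick sanity check, assuming the intent is one label per output unit:

```python
# classes exactly as written in the script above -- note 'six' is absent
classes = ['zero', 'one', 'two', 'three', 'four',
           'five', 'seven', 'eight', 'nine']
num_output_units = 10  # units in the final Dense(10, activation='softmax')

print(len(classes))  # 9
if len(classes) != num_output_units:
    print('label list does not cover all', num_output_units, 'outputs')
```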
What I tried
I searched online for about an hour but could not find a page that matched this error. I also ran the previous step of the same tutorial, a system that counts the number of raised fingers, and that one executed without any problem.
Additional information (framework/tool versions, etc.)
Below is the site I used as a reference; even when I try to run its code unmodified, the error occurs at model.compile.
https://book.mynavi.jp/manatee/detail/id=99768
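The exact versions of keras and tensorflow in the Sotsuken environment would help pin this down, since this particular TypeError is often reported when a standalone Keras 2.3.x release is combined with a newer TensorFlow 2.x. A minimal sketch to collect them (the package names are assumed to be the pip distribution names):

```python
# Report installed versions of the relevant packages, if present.
from importlib import metadata  # available in Python 3.8+


def installed_versions(pkgs=("keras", "tensorflow", "tensorflowjs")):
    """Return {distribution name: version string, or None if not installed}."""
    versions = {}
    for pkg in pkgs:
        try:
            versions[pkg] = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            versions[pkg] = None
    return versions


print(installed_versions())
```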