Background
I am a beginner in machine learning.
I would appreciate it if you could point out anything I am doing wrong.
What I want to achieve
I want to classify EEG signals into three states using a CNN combined with an LSTM.
The data is a 5-dimensional array of [trials, batch size, time, frequency, electrodes]:
Trials: 6000
Batch size: 1
Time: 250
Frequency: 29
Electrodes: 7
At the moment, val_acc stays fixed at 33.3%.
I cannot tell whether the problem is in how I feed the data or in how the model is built.
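For reference, as far as I understand, a TimeDistributed(Conv2D) stack in Keras expects 5-dimensional input of shape (samples, timesteps, height, width, channels). A minimal sketch of how my 6000 trials map onto that layout (using hypothetical random data in place of data.npy) is:

import numpy as np

# Hypothetical stand-in for data.npy: 6000 trials of
# 250 time samples x 29 frequency bins x 7 electrodes
dummy = np.random.randn(6000, 250, 29, 7).astype('float32')

# Layout fed to TimeDistributed(Conv2D): (samples, timesteps, height, width, channels).
# With timesteps fixed at 1, the LSTM later in the model only ever sees a sequence of length 1.
x = dummy.reshape(6000, 1, 250, 29, 7)
print(x.shape)  # (6000, 1, 250, 29, 7)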
The problem
Model: "sequential_13" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= time_distributed_48 (TimeDis (None, 1, 250, 29, 64) 4096 _________________________________________________________________ time_distributed_49 (TimeDis (None, 1, 125, 14, 64) 0 _________________________________________________________________ time_distributed_50 (TimeDis (None, 1, 125, 14, 64) 36928 _________________________________________________________________ time_distributed_51 (TimeDis (None, 1, 62, 7, 64) 0 _________________________________________________________________ time_distributed_52 (TimeDis (None, 1, 27776) 0 _________________________________________________________________ lstm_10 (LSTM) (None, 128) 14287360 _________________________________________________________________ dense_9 (Dense) (None, 3) 387 ================================================================= Total params: 14,328,771 Trainable params: 14,328,771 Non-trainable params: 0 _________________________________________________________________ Epoch 1/1000 34/34 [==============================] - 15s 455ms/step - loss: 1.1557 - accuracy: 0.3375 - val_loss: 1.1155 - val_accuracy: 0.3333 Epoch 2/1000 34/34 [==============================] - 15s 442ms/step - loss: 1.1020 - accuracy: 0.3298 - val_loss: 1.0996 - val_accuracy: 0.3333 Epoch 3/1000 34/34 [==============================] - 15s 451ms/step - loss: 1.0989 - accuracy: 0.3405 - val_loss: 1.0996 - val_accuracy: 0.3333 Epoch 4/1000 34/34 [==============================] - 15s 453ms/step - loss: 1.1001 - accuracy: 0.3330 - val_loss: 1.1003 - val_accuracy: 0.3333 Epoch 5/1000 34/34 [==============================] - 15s 454ms/step - loss: 1.0993 - accuracy: 0.3420 - val_loss: 1.1009 - val_accuracy: 0.3333 Epoch 6/1000 34/34 [==============================] - 16s 468ms/step - loss: 1.1004 - accuracy: 0.3336 - val_loss: 1.0998 - val_accuracy: 0.3333 Epoch 7/1000 34/34 [==============================] - 17s 487ms/step - loss: 1.0994 - accuracy: 0.3238 - val_loss: 1.0986 - val_accuracy: 0.3333 Epoch 8/1000 34/34 [==============================] - 15s 452ms/step - loss: 1.0993 - accuracy: 0.3253 - val_loss: 1.0990 - val_accuracy: 0.3333 Epoch 9/1000 34/34 [==============================] - 15s 455ms/step - loss: 1.0990 - accuracy: 0.3420 - val_loss: 1.0995 - val_accuracy: 0.3333 Epoch 10/1000 34/34 [==============================] - 16s 459ms/step - loss: 1.1002 - accuracy: 0.3187 - val_loss: 1.0988 - val_accuracy: 0.3333 Epoch 11/1000 34/34 [==============================] - 16s 457ms/step - loss: 1.0998 - accuracy: 0.3241 - val_loss: 1.0991 - val_accuracy: 0.3333 Epoch 12/1000 34/34 [==============================] - 16s 458ms/step - loss: 1.0996 - accuracy: 0.3262 - val_loss: 1.0996 - val_accuracy: 0.3333 Epoch 00012: early stopping 0.3333333333333333 test_loss: 1.100, test_acc: 0.333
Relevant source code
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import TimeDistributed, Conv2D, MaxPooling2D, Flatten, LSTM, Dense
from tensorflow.keras import optimizers
from tensorflow.keras.callbacks import EarlyStopping

# Load the EEG data: 6000 trials of (250 time samples, 29 frequency bins, 7 electrodes)
data = np.load('data.npy')

# Labels: 2000 trials per class
ans_1 = np.array([1] * 2000)
ans_2 = np.array([2] * 2000)
ans_0 = np.array([0] * 2000)
y_ans = np.append(ans_1, [ans_2, ans_0])  # shape (6000,)

# Reshape to (samples, timesteps, height, width, channels) for TimeDistributed(Conv2D)
X_train = np.reshape(data, [6000, 1, 250, 29, 7])
y_train = y_ans

# Hold out 30% for test, then 20% of the remainder for validation (stratified)
X_train, X_test, y_train, y_test = train_test_split(
    X_train, y_train, test_size=0.3, random_state=1, stratify=y_train)
X_train, X_val, y_train, y_val = train_test_split(
    X_train, y_train, test_size=0.2, random_state=1, stratify=y_train)

# CNN applied per timestep, followed by an LSTM and a 3-class softmax
model = Sequential()
model.add(TimeDistributed(Conv2D(64, (3, 3), padding='same', activation='relu'),
                          input_shape=(1, 250, 29, 7)))
model.add(TimeDistributed(MaxPooling2D(pool_size=(2, 2))))
model.add(TimeDistributed(Conv2D(64, (3, 3), padding='same', activation='relu')))
model.add(TimeDistributed(MaxPooling2D(pool_size=(2, 2))))
model.add(TimeDistributed(Flatten()))
model.add(LSTM(units=128, return_sequences=False))
model.add(Dense(3, kernel_initializer='glorot_normal', activation='softmax'))
model.summary()

optimizer = optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999, amsgrad=True)
model.compile(loss='sparse_categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])

es = EarlyStopping(monitor='val_loss', patience=5, verbose=1)
hist = model.fit(X_train, y_train, epochs=1000, batch_size=100, verbose=1,
                 validation_data=(X_val, y_val), callbacks=[es])

loss, acc = model.evaluate(X_test, y_test, verbose=0)
print('test_loss: {:.3f}, test_acc: {:.3f}'.format(loss, acc))
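As a sanity check on how I feed the data (a hypothetical snippet reusing the arrays defined above), I believe the labels and the stratified splits can be verified like this:

import numpy as np

# Labels should cover all 6000 trials with 2000 per class
print(y_ans.shape)                           # (6000,)
print(np.unique(y_ans, return_counts=True))  # ([0, 1, 2], [2000, 2000, 2000])

# Each stratified subset should stay roughly balanced across the three classes
for name, y in [('train', y_train), ('val', y_val), ('test', y_test)]:
    print(name, np.unique(y, return_counts=True))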
What I tried
I wrote the code by referring to a program published on Kaggle, but it did not work.
