Question edit history

Revision 2: changed the results; changed the number of layers

title: no changes

body: CHANGED
@@ -5,22 +5,24 @@
 I'm doing regression analysis on images using Keras.
 With the code below, the loss does not go down.
 The results look like this:
-Epoch 303/500
-400/400 [==============================] - 1s 4ms/step - loss: 100.0000 - val_loss: 100.0000
-Epoch 304/500
-400/400 [==============================] - 1s 4ms/step - loss: 100.0000 - val_loss: 100.0000
-Epoch 305/500
-400/400 [==============================] - 1s 3ms/step - loss: 100.0000 - val_loss: 100.0000
-Epoch 306/500
-224/400 [===============>..............] - ETA: 0s - loss: 100.0000
 
+Epoch 84/500
+1600/1600 [==============================] - 5s 3ms/step - loss: 14.8227 - val_loss: 5.6889
+Epoch 85/500
+1600/1600 [==============================] - 5s 3ms/step - loss: 15.6330 - val_loss: 6.1703
+Epoch 86/500
+1600/1600 [==============================] - 5s 3ms/step - loss: 15.7420 - val_loss: 6.5914
+Epoch 87/500
+1600/1600 [==============================] - 5s 3ms/step - loss: 15.3729 - val_loss: 3.6529
 
+
 If anyone knows the cause, I would appreciate your help.
 What I am using this time:
 Activation function: LeakyReLU
 Loss function: relative error
 
-```enter language here
+```enter language here
+# Predict the maximum stress value
 from keras.models import Sequential
 from keras.layers import Activation, Dense, Dropout, LeakyReLU
 #from keras.layers.advanced_activations import LeakyReLU
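A note on the "relative error" loss mentioned above: the question does not name the exact implementation, but in Keras this usually means `mean_absolute_percentage_error`, which reports the loss as a percentage. That would explain the old log plateauing at exactly `loss: 100.0000` — a model whose predictions collapse toward zero scores a 100% relative error on every positive target. A minimal pure-Python sketch of the quantity (an assumption about the loss, not code from the question):

```python
def mape(y_true, y_pred, eps=1e-7):
    # Mean absolute percentage error ("relative error" as a percentage).
    # Keras' mean_absolute_percentage_error computes the same quantity,
    # with the denominator clipped away from zero as done here via eps.
    terms = [abs((t - p) / max(abs(t), eps)) for t, p in zip(y_true, y_pred)]
    return 100.0 * sum(terms) / len(terms)

print(mape([2.0, 4.0], [1.0, 5.0]))  # 37.5: errors of 50% and 25% average out
print(mape([3.0, 7.0], [0.0, 0.0]))  # 100.0: all-zero predictions plateau here
```

This is why a loss stuck at 100.0000 is a signal about the predictions (all near zero), not just about convergence speed.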
@@ -40,14 +42,10 @@
 
 start_time = time.time()
 print("Start time: " + str(start_time))
-
-
 # Enter the number of images of each kind
-A =
+A = 1000
-B =
+B = 1000
 sum =A+B
-
-
 # Build the training data.
 image_list = []
 location_list = []
@@ -61,7 +59,7 @@
 # Batch size
 BATCH_SIZE = 32
 # Learning rate
-LR = 0.
+LR = 0.00001
 # Number of training samples: train=sum
 train=sum
 
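As context for how small `LR = 0.00001` is: with Adam, the per-weight step size is roughly capped at the learning rate regardless of the gradient's magnitude, so a tiny LR means very slow progress even on large gradients. A scalar sketch of one update, using the standard Adam formulas (not code from the question):

```python
import math

def adam_step(w, grad, m, v, t, lr=0.00001, b1=0.9, b2=0.999, eps=1e-8):
    # One Adam update for a single scalar weight.
    # m, v are the running first/second moment estimates; t is the step count (1-based).
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad * grad
    m_hat = m / (1 - b1 ** t)          # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)          # bias-corrected second moment
    w = w - lr * m_hat / (math.sqrt(v_hat) + eps)
    return w, m, v

# On the first step the bias correction makes m_hat = grad and v_hat = grad**2,
# so the weight moves by almost exactly lr, whatever the gradient is.
w, m, v = adam_step(1.0, 2.0, 0.0, 0.0, t=1)
print(w)  # ~ 1.0 - 0.00001
```

This is only a note on step-size scale; whether 1e-5 is too small for this problem still has to be checked empirically.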
@@ -118,42 +116,25 @@
 # Create the model and build the neural network
 model = Sequential()
 
-model.add(Dense(
+model.add(Dense(8000, input_dim=Z,kernel_initializer='random_uniform',bias_initializer='zeros'))
 #model.add(Activation("LeakyReLU"))
 model.add(LeakyReLU())
 model.add(Dropout(0.5))
 
-model.add(Dense(5000,kernel_initializer='random_uniform',bias_initializer='zeros'))
-model.add(LeakyReLU())
-model.add(Dropout(0.5))
-
-model.add(Dense(2000,kernel_initializer='random_uniform',bias_initializer='zeros'))
-model.add(LeakyReLU())
-model.add(Dropout(0.5))
-
-model.add(Dense(1000,kernel_initializer='random_uniform',bias_initializer='zeros'))
-model.add(LeakyReLU())
-model.add(Dropout(0.5))
-
-model.add(Dense(500,kernel_initializer='random_uniform',bias_initializer='zeros'))
-model.add(LeakyReLU())
-model.add(Dropout(0.5))
-
 model.add(Dense(100,kernel_initializer='random_uniform',bias_initializer='zeros'))
 model.add(LeakyReLU())
-model.add(Dropout(0.
+model.add(Dropout(0.2))
 
 model.add(Dense(50,kernel_initializer='random_uniform',bias_initializer='zeros'))
 model.add(LeakyReLU())
 model.add(Dropout(0.2))
 
-model.add(Dense(
+model.add(Dense(10,kernel_initializer='random_uniform',bias_initializer='zeros'))
 model.add(LeakyReLU())
 model.add(Dropout(0.2))
 
-
 model.add(Dense(1))
-model.add(Activation("
+model.add(Activation("linear"))
 
 # Use Adam as the optimizer
 opt = Adam(lr=LR)
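The effect of dropping the 5000/2000/1000/500-unit layers can be checked by counting parameters: a Dense layer with `n_in` inputs and `n_out` units stores `n_in * n_out` weights plus `n_out` biases. A small sketch; the input dimension `Z` below is a made-up stand-in, since the real value depends on the flattened image size:

```python
def dense_params(n_in, n_out):
    # weight matrix (n_in x n_out) plus one bias per unit
    return n_in * n_out + n_out

def total_params(n_in, units):
    # sum over a stack of Dense layers, feeding each layer's width into the next
    total = 0
    for n_out in units:
        total += dense_params(n_in, n_out)
        n_in = n_out
    return total

Z = 10000  # hypothetical flattened input size, for illustration only
old = total_params(Z, [8000, 5000, 2000, 1000, 500, 100, 50, 10, 1])
new = total_params(Z, [8000, 100, 50, 10, 1])
print(old, new)
```

With any plausible `Z`, the first 8000-unit layer dominates both versions, so the revision mostly removes the inner layers' tens of millions of parameters while leaving the biggest block untouched.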
@@ -186,19 +167,16 @@
 #print(predicted)
 np.savetxt("result/max_stress_value_predict_result.csv",predicted,delimiter=",")
 
-
-
 end_time = time.time()
 print("\nEnd time: ",end_time)
 print ("Elapsed time: ", (end_time - start_time))
-
-
 ttime = end_time - start_time
 fa = open("result/TIME.txt","w")
 fa.write("\nElapsed time:{} ".format(ttime))
 fa.close()
 
 
+
 ```
 
 Addendum
Revision 1: added the input data; changed the output layer's activation function

title: no changes

body: CHANGED
@@ -1,7 +1,9 @@
+![](331918f553351c8eab81c787a0a6fef9.png)
+This is the input data used this time. It is binarized.
+
 ### The loss does not decrease
 I'm doing regression analysis on images using Keras.
 With the code below, the loss does not go down.
-__italic text__
 The results look like this:
 Epoch 303/500
 400/400 [==============================] - 1s 4ms/step - loss: 100.0000 - val_loss: 100.0000
@@ -197,4 +199,7 @@
 fa.close()
 
 
-```
+```
+
+Addendum
+After changing the output layer's activation function to linear, the error dropped to 25%, but it does not go down any further.
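For reference, the LeakyReLU used in every hidden layer above is the identity for non-negative inputs and a small linear slope for negative ones; when called with no argument, as in `model.add(LeakyReLU())`, Keras uses a default slope of 0.3. A minimal sketch of the function itself:

```python
def leaky_relu(x, alpha=0.3):
    # identity for x >= 0, small negative slope otherwise
    # (alpha=0.3 matches Keras' LeakyReLU default when no argument is given)
    return x if x >= 0 else alpha * x

print(leaky_relu(5.0))    # 5.0
print(leaky_relu(-10.0))  # -3.0
```

Unlike plain ReLU, negative pre-activations still pass a scaled gradient, which is why it is often preferred when units risk going silent; the final `linear` output layer, by contrast, applies no transformation at all, which is the usual choice for regression targets like a stress value.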