Question edit history

2

Changed the results; changed the number of layers

2018/10/23 07:42

Posted

FALLOT

Score: 16

test CHANGED
File without changes
test CHANGED
@@ -12,21 +12,23 @@

 The results look like this.

- Epoch 303/500
+
-
- 400/400 [==============================] - 1s 4ms/step - loss: 100.0000 - val_loss: 100.0000
+
-
- Epoch 304/500
+ Epoch 84/500
-
+
- 400/400 [==============================] - 1s 4ms/step - loss: 100.0000 - val_loss: 100.0000
+ 1600/1600 [==============================] - 5s 3ms/step - loss: 14.8227 - val_loss: 5.6889
-
+
- Epoch 305/500
+ Epoch 85/500
-
+
- 400/400 [==============================] - 1s 3ms/step - loss: 100.0000 - val_loss: 100.0000
+ 1600/1600 [==============================] - 5s 3ms/step - loss: 15.6330 - val_loss: 6.1703
-
+
- Epoch 306/500
+ Epoch 86/500
-
+
- 224/400 [===============>..............] - ETA: 0s - loss: 100.0000
+ 1600/1600 [==============================] - 5s 3ms/step - loss: 15.7420 - val_loss: 6.5914
+
+ Epoch 87/500
+
+ 1600/1600 [==============================] - 5s 3ms/step - loss: 15.3729 - val_loss: 3.6529



@@ -42,7 +44,9 @@



- ```ここに言語を入力# Predict the maximum stress value
+ ```ここに言語を入力
+
+ # Predict the maximum stress value

 from keras.models import Sequential

@@ -82,22 +86,14 @@

 print("開始時刻: " + str(start_time))

-
-
-
-
 # Enter the number of images of each type

- A = 250
+ A = 1000
-
+
- B = 250
+ B = 1000

 sum =A+B

-
-
-
-
 # Build the training data.

 image_list = []
@@ -124,7 +120,7 @@

 # Learning rate

- LR = 0.0001
+ LR = 0.00001

 # Number of training samples: train=sum

@@ -238,7 +234,7 @@



- model.add(Dense(5000, input_dim=Z,kernel_initializer='random_uniform',bias_initializer='zeros'))
+ model.add(Dense(8000, input_dim=Z,kernel_initializer='random_uniform',bias_initializer='zeros'))

 #model.add(Activation("LeakyReLU"))

@@ -248,67 +244,33 @@



- model.add(Dense(5000,kernel_initializer='random_uniform',bias_initializer='zeros'))
+ model.add(Dense(100,kernel_initializer='random_uniform',bias_initializer='zeros'))

 model.add(LeakyReLU())

- model.add(Dropout(0.5))
+ model.add(Dropout(0.2))
-
-
-
+
+
+
- model.add(Dense(2000,kernel_initializer='random_uniform',bias_initializer='zeros'))
+ model.add(Dense(50,kernel_initializer='random_uniform',bias_initializer='zeros'))

 model.add(LeakyReLU())

- model.add(Dropout(0.5))
+ model.add(Dropout(0.2))
-
-
-
+
+
+
- model.add(Dense(1000,kernel_initializer='random_uniform',bias_initializer='zeros'))
+ model.add(Dense(10,kernel_initializer='random_uniform',bias_initializer='zeros'))

 model.add(LeakyReLU())

- model.add(Dropout(0.5))
-
-
-
- model.add(Dense(500,kernel_initializer='random_uniform',bias_initializer='zeros'))
-
- model.add(LeakyReLU())
-
- model.add(Dropout(0.5))
-
-
-
- model.add(Dense(100,kernel_initializer='random_uniform',bias_initializer='zeros'))
-
- model.add(LeakyReLU())
-
- model.add(Dropout(0.5))
-
-
-
- model.add(Dense(50,kernel_initializer='random_uniform',bias_initializer='zeros'))
-
- model.add(LeakyReLU())
-
 model.add(Dropout(0.2))



- model.add(Dense(20,kernel_initializer='random_uniform',bias_initializer='zeros'))
-
- model.add(LeakyReLU())
-
- model.add(Dropout(0.2))
-
-
-
-
-
 model.add(Dense(1))

- model.add(Activation("softmax"))
+ model.add(Activation("linear"))



@@ -374,20 +336,12 @@



-
-
-
-
 end_time = time.time()

 print("\n終了時刻: ",end_time)

 print ("かかった時間: ", (end_time - start_time))

-
-
-
-
 ttime = end_time - start_time

 fa = open("result/TIME.txt","w")
@@ -400,6 +354,8 @@



+
+
 ```


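A minimal sketch of the fully connected model this revision describes (widths 8000 → 100 → 50 → 10 → 1, LeakyReLU activations, Dropout(0.2), linear output, learning rate 1e-5). The flattened input size Z, the activation after the first layer, the loss function, and the optimizer are assumptions and are not taken from the diff:

```
# Sketch of the revised architecture; Z, the loss and the optimizer are assumed.
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, LeakyReLU
from keras.optimizers import Adam

Z = 100 * 100  # assumed flattened image size
LR = 0.00001   # learning rate after this revision

model = Sequential()
model.add(Dense(8000, input_dim=Z, kernel_initializer='random_uniform', bias_initializer='zeros'))
model.add(LeakyReLU())
model.add(Dropout(0.2))

model.add(Dense(100, kernel_initializer='random_uniform', bias_initializer='zeros'))
model.add(LeakyReLU())
model.add(Dropout(0.2))

model.add(Dense(50, kernel_initializer='random_uniform', bias_initializer='zeros'))
model.add(LeakyReLU())
model.add(Dropout(0.2))

model.add(Dense(10, kernel_initializer='random_uniform', bias_initializer='zeros'))
model.add(LeakyReLU())
model.add(Dropout(0.2))

model.add(Dense(1))
model.add(Activation("linear"))  # regression output (changed from softmax)

# Assumed loss: mean absolute percentage error, so the reported loss reads as a percentage.
model.compile(loss='mean_absolute_percentage_error', optimizer=Adam(lr=LR))
```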

1

Added the input data; changed the output layer activation function

2018/10/23 07:42

Posted

FALLOT

Score: 16

test CHANGED
File without changes
test CHANGED
@@ -1,11 +1,15 @@
+ ![![This image is the input data for this question](4d6446c49a1c3b85f49a179ac5b37ef2.png)](331918f553351c8eab81c787a0a6fef9.png)
+
+ This is the input data. It has been binarized.
+
+
+
 ### The loss does not decrease

 I am doing regression on images with keras.

 With the code below, the loss does not go down.

- __italic text__
-
 The results look like this.

 Epoch 303/500
@@ -397,3 +401,9 @@


 ```
+
+
+
+ Addendum
+
+ After changing the output layer activation function to linear, the error dropped to 25%, but it will not go any lower than that.
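A side note on why this change mattered: softmax over a single output unit always returns 1.0, so the earlier Dense(1) + softmax output could never fit a continuous target, whereas a linear unit can output any real value. A small self-contained check (NumPy only; not from the original post):

```
# softmax over one value is always 1.0, so a Dense(1) + softmax output cannot move.
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

print(softmax(np.array([3.7])))     # [1.]
print(softmax(np.array([-120.0])))  # [1.]
```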