Question edit history

4

Revision

2017/07/02 14:26

Posted

zakio49

Score 29

test CHANGED
File without changes
test CHANGED
@@ -32,6 +32,14 @@
32
32
 
33
33
 
34
34
 
35
+ The data is a 1×13 matrix. In the training data the first 1×1 value holds the correct answer, and since the inference() function is called again when the model is reused, a 0 is put in that position so the practice data keeps the same format (a loading sketch follows this diff).
36
+
37
+ ````
38
+
39
+ 0,0.714285714,0.857142857,0.714285714,0.571428571,0.571428571,0.714285714,0.571428571,0.714285714,0.714285714,0.714285714,0.571428571,0.714285714
40
+
41
+ ````
42
+
35
43
 
36
44
 
37
45
  ````
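As a side note on the row format described in this revision: when a CSV contains only one record, `numpy.loadtxt` returns a 1-D array, which appears to be what later triggers the shape error quoted in the next revision. Below is a minimal Python sketch (not part of the original question) of how the single 13-value row can be brought back to the 2-D layout the placeholders expect; `one_record.csv` is assumed to contain exactly the row shown above.

````
import numpy

# Minimal sketch: a one-row CSV loads as a 1-D array of 13 values.
test2 = numpy.loadtxt(open("one_record.csv"), delimiter=",")
print(test2.shape)                      # (13,)

# Reshape to one row x 13 columns so it matches the training-data layout.
test2 = test2.reshape(1, -1)            # (1, 13)
[tensor2, score2] = numpy.hsplit(test2, [1])
print(tensor2.shape, score2.shape)      # (1, 1) (1, 12) -- matches (?, 1) and (?, 12)
````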

3

Revision

2017/07/02 14:26

Posted

zakio49

Score 29

test CHANGED
File without changes
test CHANGED
@@ -32,6 +32,68 @@
32
32
 
33
33
 
34
34
 
35
+
36
+
37
+ ````
38
+
39
+ with tf.Session() as sess2:
40
+
41
+
42
+
43
+ #TRAIN_DATA_SIZE2 = 0  when the CSV has multiple rows, split at row 0 so that all rows go to the practice data
44
+
45
+ test2 = numpy.loadtxt(open("one_record.csv"), delimiter=",")
46
+
47
+ [tensor2,score2] = numpy.hsplit(test2, [1])
48
+
49
+ #[tensor_train2,tensor_test2] = numpy.vsplit(tensor2, [TRAIN_DATA_SIZE2])
50
+
51
+ #[score_train2, score_test2] = numpy.vsplit(score2, [TRAIN_DATA_SIZE2])
52
+
53
+ print(tensor2)
54
+
55
+ print(score2)
56
+
57
+ # build the model
58
+
59
+ feed_dict_test2 = {
60
+
61
+
62
+
63
+ With a multi-row CSV I used vsplit, and the calculation worked because tensor_test2 and score_test2 went into tensor and score respectively. How should I fix this?
64
+
65
+ tensor_placeholder:tensor2,
66
+
67
+ score_placeholder:score2,
68
+
69
+ loss_label_placeholder:"loss_test2"
70
+
71
+ }
72
+
73
+
74
+
75
+ saver = tf.train.Saver()
76
+
77
+ cwd = os.getcwd()
78
+
79
+ saver.restore(sess2,cwd + "/model.ckpt")
80
+
81
+
82
+
83
+ print("recover")
84
+
85
+ best_match2 = sess2.run(output, feed_dict=feed_dict_test2)
86
+
87
+ print(best_match2)
88
+
89
+ print("fin")
90
+
91
+ sess2.close()
92
+
93
+ ````
94
+
95
+
96
+
35
97
  **
36
98
 
37
99
 
@@ -58,7 +120,231 @@
58
120
 
59
121
  [https://teratail.com/questions/82450](https://teratail.com/questions/82450)
60
122
 
123
+
124
+
125
+
126
+
127
+
128
+
129
+ Error message
130
+
131
+ ````
132
+
133
+ ValueError: Cannot feed value of shape (1,) for Tensor 'tensor_placeholder:0', which has shape '(?, 1)'
134
+
135
+ ````
136
+
137
+
138
+
139
+
140
+
141
+
142
+
143
+
144
+
145
+ ````
146
+
147
+ import tensorflow as tf
148
+
149
+ import numpy
150
+
151
+ import os
152
+
153
+
154
+
155
+ cwd = os.getcwd()
156
+
157
+
158
+
159
+ SCORE_SIZE = 12
160
+
161
+ HIDDEN_UNIT_SIZE = 70
162
+
163
+ TRAIN_DATA_SIZE = 45
164
+
165
+
166
+
167
+ raw_input = numpy.loadtxt(open("test.csv"), delimiter=",")
168
+
169
+ [tensor, score] = numpy.hsplit(raw_input, [1])
170
+
171
+ [tensor_train, tensor_test] = numpy.vsplit(tensor, [TRAIN_DATA_SIZE])
172
+
173
+ [score_train, score_test] = numpy.vsplit(score, [TRAIN_DATA_SIZE])
174
+
175
+ # tensor is the correct-answer data, *_train is for training, score is the input data, *_test is the practice data
176
+
177
+
178
+
179
+ def inference(score_placeholder):
180
+
181
+ with tf.name_scope('hidden1') as scope:
182
+
183
+ hidden1_weight = tf.Variable(tf.truncated_normal([SCORE_SIZE, HIDDEN_UNIT_SIZE], stddev=0.1), name="hidden1_weight")
184
+
185
+ hidden1_bias = tf.Variable(tf.constant(0.1, shape=[HIDDEN_UNIT_SIZE]), name="hidden1_bias")
186
+
187
+ hidden1_output = tf.nn.relu(tf.matmul(score_placeholder, hidden1_weight) + hidden1_bias)
188
+
189
+ with tf.name_scope('output') as scope:
190
+
191
+ output_weight = tf.Variable(tf.truncated_normal([HIDDEN_UNIT_SIZE, 1], stddev=0.1), name="output_weight")
192
+
193
+ output_bias = tf.Variable(tf.constant(0.1, shape=[1]), name="output_bias")
194
+
195
+ output = tf.matmul(hidden1_output, output_weight) + output_bias
196
+
197
+ print(output)
198
+
199
+ return tf.nn.l2_normalize(output, 0)
200
+
201
+
202
+
203
+ def loss(output, tensor_placeholder, loss_label_placeholder):
204
+
205
+ with tf.name_scope('loss') as scope:
206
+
207
+ loss = tf.nn.l2_loss(output - tf.nn.l2_normalize(tensor_placeholder, 0))
208
+
209
+ tf.summary.scalar('loss_label_placeholder', loss)
210
+
211
+ return loss
212
+
213
+
214
+
215
+ def training(loss):
216
+
217
+ with tf.name_scope('training') as scope:
218
+
219
+ train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
220
+
221
+ return train_step
222
+
223
+
224
+
225
+
226
+
227
+
228
+
229
+ with tf.Graph().as_default():
230
+
231
+ tensor_placeholder = tf.placeholder("float", [None,1], name="tensor_placeholder")
232
+
233
+ score_placeholder = tf.placeholder("float", [None, SCORE_SIZE], name="score_placeholder")
234
+
235
+ loss_label_placeholder = tf.placeholder("string", name="loss_label_placeholder")
236
+
237
+
238
+
239
+ feed_dict_train={
240
+
241
+ tensor_placeholder: tensor_train,
242
+
243
+ score_placeholder: score_train,
244
+
245
+ loss_label_placeholder: "loss_train"
246
+
247
+ }
248
+
249
+ # tensor is the correct-answer data, *_train is for training, score is the input data, *_test is the practice data
250
+
251
+
252
+
253
+ feed_dict_test={
254
+
255
+ tensor_placeholder: tensor_test,
256
+
257
+ score_placeholder: score_test,
258
+
259
+ loss_label_placeholder: "loss_test"
260
+
261
+ }
262
+
263
+
264
+
265
+ output = inference(score_placeholder)
266
+
267
+ loss = loss(output, tensor_placeholder, loss_label_placeholder)
268
+
269
+ training_op = training(loss)
270
+
271
+ summary_op = tf.summary.merge_all()
272
+
273
+ init = tf.global_variables_initializer()
274
+
61
- Putting in test2 made the calculation work. How should I fix this?
275
+ best_loss = float("inf")
276
+
277
+
278
+
279
+ with tf.Session() as sess:
280
+
281
+ summary_writer = tf.summary.FileWriter('data', graph_def=sess.graph_def)
282
+
283
+ sess.run(init)
284
+
285
+ for step in range(10000):
286
+
287
+ sess.run(training_op, feed_dict=feed_dict_train)
288
+
289
+ loss_test = sess.run(loss, feed_dict=feed_dict_test)
290
+
291
+ if loss_test < best_loss:
292
+
293
+ best_loss = loss_test
294
+
295
+ best_match = sess.run(output, feed_dict=feed_dict_test)
296
+
297
+ if step % 100 == 0:
298
+
299
+ summary_str = sess.run(summary_op, feed_dict=feed_dict_test)
300
+
301
+ summary_str += sess.run(summary_op, feed_dict=feed_dict_train)
302
+
303
+ summary_writer.add_summary(summary_str, step)
304
+
305
+
306
+
307
+ saver=tf.train.Saver()
308
+
309
+ saver.save(sess,cwd+'/model.ckpt')
310
+
311
+ print(cwd)
312
+
313
+ print(best_match)
314
+
315
+ print('Saved a model.')
316
+
317
+ sess.close()
318
+
319
+
320
+
321
+ with tf.Session() as sess2:
322
+
323
+ # load the variables
324
+
325
+ # new data
326
+
327
+ TRAIN_DATA_SIZE2 = 0
328
+
329
+ test2 = numpy.loadtxt(open("one_record.csv"), delimiter=",")
330
+
331
+ [tensor2,score2] = numpy.hsplit(test2, [1])
332
+
333
+ #[tensor_train2,tensor_test2] = numpy.vsplit(tensor2, [TRAIN_DATA_SIZE2])
334
+
335
+ #[score_train2, score_test2] = numpy.vsplit(score2, [TRAIN_DATA_SIZE2])
336
+
337
+ # tensor is the correct-answer data, *_train is for training, score is the input data, *_test is the practice data
338
+
339
+ print(tensor2)
340
+
341
+ print(score2)
342
+
343
+ # build the model
344
+
345
+ feed_dict_test2 = {
346
+
347
+ tensor_placeholder:tensor2,
62
348
 
63
349
  score_placeholder:score2,
64
350
 
@@ -66,7 +352,7 @@
66
352
 
67
353
  }
68
354
 
69
-
355
+ # restore the checkpoint and feed the weights fixed by the loss function into the prediction function
70
356
 
71
357
  saver = tf.train.Saver()
72
358
 
@@ -87,259 +373,3 @@
87
373
  sess2.close()
88
374
 
89
375
  ````
90
-
91
-
92
-
93
-
94
-
95
- Error message
96
-
97
- ````
98
-
99
- ValueError: Cannot feed value of shape (1,) for Tensor 'tensor_placeholder:0', which has shape '(?, 1)'
100
-
101
- ````
102
-
103
-
104
-
105
-
106
-
107
-
108
-
109
-
110
-
111
- ````
112
-
113
-
114
-
115
-
116
-
117
- import tensorflow as tf
118
-
119
- import numpy
120
-
121
- import os
122
-
123
-
124
-
125
- cwd = os.getcwd()
126
-
127
-
128
-
129
- SCORE_SIZE = 12
130
-
131
- HIDDEN_UNIT_SIZE = 70
132
-
133
- TRAIN_DATA_SIZE = 45
134
-
135
-
136
-
137
- raw_input = numpy.loadtxt(open("test.csv"), delimiter=",")
138
-
139
- [tensor, score] = numpy.hsplit(raw_input, [1])
140
-
141
- [tensor_train, tensor_test] = numpy.vsplit(tensor, [TRAIN_DATA_SIZE])
142
-
143
- [score_train, score_test] = numpy.vsplit(score, [TRAIN_DATA_SIZE])
144
-
145
- # tensor is the correct-answer data, *_train is for training, score is the input data, *_test is the practice data
146
-
147
-
148
-
149
- def inference(score_placeholder):
150
-
151
- with tf.name_scope('hidden1') as scope:
152
-
153
- hidden1_weight = tf.Variable(tf.truncated_normal([SCORE_SIZE, HIDDEN_UNIT_SIZE], stddev=0.1), name="hidden1_weight")
154
-
155
- hidden1_bias = tf.Variable(tf.constant(0.1, shape=[HIDDEN_UNIT_SIZE]), name="hidden1_bias")
156
-
157
- hidden1_output = tf.nn.relu(tf.matmul(score_placeholder, hidden1_weight) + hidden1_bias)
158
-
159
- with tf.name_scope('output') as scope:
160
-
161
- output_weight = tf.Variable(tf.truncated_normal([HIDDEN_UNIT_SIZE, 1], stddev=0.1), name="output_weight")
162
-
163
- output_bias = tf.Variable(tf.constant(0.1, shape=[1]), name="output_bias")
164
-
165
- output = tf.matmul(hidden1_output, output_weight) + output_bias
166
-
167
- print(output)
168
-
169
- return tf.nn.l2_normalize(output, 0)
170
-
171
-
172
-
173
- def loss(output, tensor_placeholder, loss_label_placeholder):
174
-
175
- with tf.name_scope('loss') as scope:
176
-
177
- loss = tf.nn.l2_loss(output - tf.nn.l2_normalize(tensor_placeholder, 0))
178
-
179
- tf.summary.scalar('loss_label_placeholder', loss)
180
-
181
- return loss
182
-
183
-
184
-
185
- def training(loss):
186
-
187
- with tf.name_scope('training') as scope:
188
-
189
- train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
190
-
191
- return train_step
192
-
193
-
194
-
195
-
196
-
197
-
198
-
199
- with tf.Graph().as_default():
200
-
201
- tensor_placeholder = tf.placeholder("float", [None,1], name="tensor_placeholder")
202
-
203
- score_placeholder = tf.placeholder("float", [None, SCORE_SIZE], name="score_placeholder")
204
-
205
- loss_label_placeholder = tf.placeholder("string", name="loss_label_placeholder")
206
-
207
-
208
-
209
- feed_dict_train={
210
-
211
- tensor_placeholder: tensor_train,
212
-
213
- score_placeholder: score_train,
214
-
215
- loss_label_placeholder: "loss_train"
216
-
217
- }
218
-
219
- # tensor is the correct-answer data, *_train is for training, score is the input data, *_test is the practice data
220
-
221
-
222
-
223
- feed_dict_test={
224
-
225
- tensor_placeholder: tensor_test,
226
-
227
- score_placeholder: score_test,
228
-
229
- loss_label_placeholder: "loss_test"
230
-
231
- }
232
-
233
-
234
-
235
- output = inference(score_placeholder)
236
-
237
- loss = loss(output, tensor_placeholder, loss_label_placeholder)
238
-
239
- training_op = training(loss)
240
-
241
- summary_op = tf.summary.merge_all()
242
-
243
- init = tf.global_variables_initializer()
244
-
245
- best_loss = float("inf")
246
-
247
-
248
-
249
- with tf.Session() as sess:
250
-
251
- summary_writer = tf.summary.FileWriter('data', graph_def=sess.graph_def)
252
-
253
- sess.run(init)
254
-
255
- for step in range(10000):
256
-
257
- sess.run(training_op, feed_dict=feed_dict_train)
258
-
259
- loss_test = sess.run(loss, feed_dict=feed_dict_test)
260
-
261
- if loss_test < best_loss:
262
-
263
- best_loss = loss_test
264
-
265
- best_match = sess.run(output, feed_dict=feed_dict_test)
266
-
267
- if step % 100 == 0:
268
-
269
- summary_str = sess.run(summary_op, feed_dict=feed_dict_test)
270
-
271
- summary_str += sess.run(summary_op, feed_dict=feed_dict_train)
272
-
273
- summary_writer.add_summary(summary_str, step)
274
-
275
-
276
-
277
- saver=tf.train.Saver()
278
-
279
- saver.save(sess,cwd+'/model.ckpt')
280
-
281
- print(cwd)
282
-
283
- print(best_match)
284
-
285
- print('Saved a model.')
286
-
287
- sess.close()
288
-
289
-
290
-
291
- with tf.Session() as sess2:
292
-
293
- # load the variables
294
-
295
- # new data
296
-
297
- TRAIN_DATA_SIZE2 = 0
298
-
299
- test2 = numpy.loadtxt(open("one_record.csv"), delimiter=",")
300
-
301
- [tensor2,score2] = numpy.hsplit(test2, [1])
302
-
303
- #[tensor_train2,tensor_test2] = numpy.vsplit(tensor2, [TRAIN_DATA_SIZE2])
304
-
305
- #[score_train2, score_test2] = numpy.vsplit(score2, [TRAIN_DATA_SIZE2])
306
-
307
- # tensor is the correct-answer data, *_train is for training, score is the input data, *_test is the practice data
308
-
309
- print(tensor2)
310
-
311
- print(score2)
312
-
313
- # build the model
314
-
315
- feed_dict_test2 = {
316
-
317
- tensor_placeholder:tensor2,
318
-
319
- score_placeholder:score2,
320
-
321
- loss_label_placeholder:"loss_test2"
322
-
323
- }
324
-
325
- # restore the checkpoint and feed the weights fixed by the loss function into the prediction function
326
-
327
- saver = tf.train.Saver()
328
-
329
- cwd = os.getcwd()
330
-
331
- saver.restore(sess2,cwd + "/model.ckpt")
332
-
333
-
334
-
335
- print("recover")
336
-
337
- best_match2 = sess2.run(output, feed_dict=feed_dict_test2)
338
-
339
- print(best_match2)
340
-
341
- print("fin")
342
-
343
- sess2.close()
344
-
345
- ````
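The script added in this revision restores `model.ckpt` in a second session and runs `output` on the one-record data. Below is a minimal sketch of that restore-and-predict step (not the author's exact code). It assumes it runs inside the same `with tf.Graph().as_default():` block as the script above, so `output` and `score_placeholder` are in scope, and that the training session has already saved `model.ckpt`. Since `output = inference(score_placeholder)` depends only on `score_placeholder`, the dummy correct-answer column does not have to be fed just to obtain a prediction.

````
import os
import numpy
import tensorflow as tf

with tf.Session() as sess2:
    # Load the single practice record and restore the 2-D layout.
    test2 = numpy.loadtxt(open("one_record.csv"), delimiter=",").reshape(1, -1)
    [tensor2, score2] = numpy.hsplit(test2, [1])   # (1, 1) and (1, 12)
    # tensor2 holds only the dummy 0 and is not needed for prediction.

    # Restore the weights saved by the training session.
    saver = tf.train.Saver()
    saver.restore(sess2, os.getcwd() + "/model.ckpt")

    # Prediction only needs the 12 feature values.
    best_match2 = sess2.run(output, feed_dict={score_placeholder: score2})
    print(best_match2)
````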

2

Revision

2017/07/01 22:08

Posted

zakio49

Score 29

test CHANGED
File without changes
test CHANGED
@@ -10,9 +10,13 @@
10
10
 
11
11
 1. Training CSV file: 13 columns × 50 rows of data, with the correct answers written in the first column.
12
12
 
13
+ The data takes a format like the following (columns 2-13 are not 0/1 strings like 00001000).
14
+
15
+ [https://gist.github.com/sergeant-wizard/b2c548fbd3b3a01b23ca](https://gist.github.com/sergeant-wizard/b2c548fbd3b3a01b23ca)
16
+
13
17
 2. For training, hsplit is applied once to split the data into [tensor, score]
14
18
 
15
- 3. Of the 50 rows, 45 go to training and 5 are set aside for testing.
19
+ 3. Of the 50 rows, 45 go to training and 5 are set aside for test predictions.
16
20
 
17
21
 **4. After training, restore the model and feed in the practice data.**
18
22
 
@@ -20,11 +24,9 @@
20
24
 
21
25
 → This raised an error, so I worked around it by putting a 0 in the first column of the practice data to make it 13 columns.
22
26
 
23
- → That worked and the model can now be restored. Since I want to predict from user input, I would like this to work with a single row of data.
27
+ → That worked and the model can now be restored. **Since I want to predict from user input, I would like this to work with a single row of data.**
24
-
28
+
25
- * Since vsplit cannot be used when the practice data is only one row, I have commented those lines out.
29
+ * Since vsplit cannot be used when the practice data is only one row, I hsplit the 13 columns 1:12 and assign the result, but the array shape apparently does not match tensor_placeholder and the CSV, so an error occurs. I am struggling to fix this shape and would appreciate your help (a one-line loadtxt fix is sketched after this revision's diff).
26
-
27
-
28
30
 
29
31
 
30
32
 
@@ -52,13 +54,241 @@
52
54
 
53
55
 
54
56
 
57
+ TensorFlow: I want to keep and restore the loss-function value of a model trained once, and make predictions on new input
58
+
59
+ [https://teratail.com/questions/82450](https://teratail.com/questions/82450)
60
+
61
+ Putting in test2 made the calculation work. How should I fix this?
62
+
63
+ score_placeholder:score2,
64
+
65
+ loss_label_placeholder:"loss_test2"
66
+
67
+ }
68
+
69
+
70
+
71
+ saver = tf.train.Saver()
72
+
73
+ cwd = os.getcwd()
74
+
55
- ・The execution environment is Windows 10 + Anaconda + Python 3.5 + TensorFlow 1.0 or later
75
+ saver.restore(sess2,cwd + "/model.ckpt")
76
+
77
+
78
+
56
-
79
+ print("recover")
80
+
81
+ best_match2 = sess2.run(output, feed_dict=feed_dict_test2)
82
+
83
+ print(best_match2)
84
+
85
+ print("fin")
86
+
87
+ sess2.close()
88
+
57
- ````
89
+ ````
90
+
91
+
92
+
93
+
94
+
58
-
95
+ Error message
96
+
59
-
97
+ ````
98
+
60
-
99
+ ValueError: Cannot feed value of shape (1,) for Tensor 'tensor_placeholder:0', which has shape '(?, 1)'
100
+
101
+ ````
102
+
103
+
104
+
105
+
106
+
107
+
108
+
109
+
110
+
111
+ ````
112
+
113
+
114
+
115
+
116
+
117
+ import tensorflow as tf
118
+
119
+ import numpy
120
+
121
+ import os
122
+
123
+
124
+
125
+ cwd = os.getcwd()
126
+
127
+
128
+
129
+ SCORE_SIZE = 12
130
+
131
+ HIDDEN_UNIT_SIZE = 70
132
+
133
+ TRAIN_DATA_SIZE = 45
134
+
135
+
136
+
137
+ raw_input = numpy.loadtxt(open("test.csv"), delimiter=",")
138
+
139
+ [tensor, score] = numpy.hsplit(raw_input, [1])
140
+
141
+ [tensor_train, tensor_test] = numpy.vsplit(tensor, [TRAIN_DATA_SIZE])
142
+
143
+ [score_train, score_test] = numpy.vsplit(score, [TRAIN_DATA_SIZE])
144
+
145
+ # tensor is the correct-answer data, *_train is for training, score is the input data, *_test is the practice data
146
+
147
+
148
+
149
+ def inference(score_placeholder):
150
+
151
+ with tf.name_scope('hidden1') as scope:
152
+
153
+ hidden1_weight = tf.Variable(tf.truncated_normal([SCORE_SIZE, HIDDEN_UNIT_SIZE], stddev=0.1), name="hidden1_weight")
154
+
155
+ hidden1_bias = tf.Variable(tf.constant(0.1, shape=[HIDDEN_UNIT_SIZE]), name="hidden1_bias")
156
+
157
+ hidden1_output = tf.nn.relu(tf.matmul(score_placeholder, hidden1_weight) + hidden1_bias)
158
+
159
+ with tf.name_scope('output') as scope:
160
+
161
+ output_weight = tf.Variable(tf.truncated_normal([HIDDEN_UNIT_SIZE, 1], stddev=0.1), name="output_weight")
162
+
163
+ output_bias = tf.Variable(tf.constant(0.1, shape=[1]), name="output_bias")
164
+
165
+ output = tf.matmul(hidden1_output, output_weight) + output_bias
166
+
167
+ print(output)
168
+
169
+ return tf.nn.l2_normalize(output, 0)
170
+
171
+
172
+
173
+ def loss(output, tensor_placeholder, loss_label_placeholder):
174
+
175
+ with tf.name_scope('loss') as scope:
176
+
177
+ loss = tf.nn.l2_loss(output - tf.nn.l2_normalize(tensor_placeholder, 0))
178
+
179
+ tf.summary.scalar('loss_label_placeholder', loss)
180
+
181
+ return loss
182
+
183
+
184
+
185
+ def training(loss):
186
+
187
+ with tf.name_scope('training') as scope:
188
+
189
+ train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
190
+
191
+ return train_step
192
+
193
+
194
+
195
+
196
+
197
+
198
+
199
+ with tf.Graph().as_default():
200
+
201
+ tensor_placeholder = tf.placeholder("float", [None,1], name="tensor_placeholder")
202
+
203
+ score_placeholder = tf.placeholder("float", [None, SCORE_SIZE], name="score_placeholder")
204
+
205
+ loss_label_placeholder = tf.placeholder("string", name="loss_label_placeholder")
206
+
207
+
208
+
209
+ feed_dict_train={
210
+
211
+ tensor_placeholder: tensor_train,
212
+
213
+ score_placeholder: score_train,
214
+
215
+ loss_label_placeholder: "loss_train"
216
+
217
+ }
218
+
219
+ # tensor is the correct-answer data, *_train is for training, score is the input data, *_test is the practice data
220
+
221
+
222
+
223
+ feed_dict_test={
224
+
225
+ tensor_placeholder: tensor_test,
226
+
227
+ score_placeholder: score_test,
228
+
229
+ loss_label_placeholder: "loss_test"
230
+
231
+ }
232
+
233
+
234
+
235
+ output = inference(score_placeholder)
236
+
237
+ loss = loss(output, tensor_placeholder, loss_label_placeholder)
238
+
239
+ training_op = training(loss)
240
+
241
+ summary_op = tf.summary.merge_all()
242
+
243
+ init = tf.global_variables_initializer()
244
+
245
+ best_loss = float("inf")
246
+
247
+
248
+
249
+ with tf.Session() as sess:
250
+
251
+ summary_writer = tf.summary.FileWriter('data', graph_def=sess.graph_def)
252
+
253
+ sess.run(init)
254
+
255
+ for step in range(10000):
256
+
257
+ sess.run(training_op, feed_dict=feed_dict_train)
258
+
259
+ loss_test = sess.run(loss, feed_dict=feed_dict_test)
260
+
261
+ if loss_test < best_loss:
262
+
263
+ best_loss = loss_test
264
+
265
+ best_match = sess.run(output, feed_dict=feed_dict_test)
266
+
267
+ if step % 100 == 0:
268
+
269
+ summary_str = sess.run(summary_op, feed_dict=feed_dict_test)
270
+
271
+ summary_str += sess.run(summary_op, feed_dict=feed_dict_train)
272
+
273
+ summary_writer.add_summary(summary_str, step)
274
+
275
+
276
+
277
+ saver=tf.train.Saver()
278
+
279
+ saver.save(sess,cwd+'/model.ckpt')
280
+
281
+ print(cwd)
282
+
283
+ print(best_match)
284
+
285
+ print('Saved a model.')
286
+
287
+ sess.close()
288
+
289
+
290
+
61
- with tf.Session() as sess2:
291
+ with tf.Session() as sess2:
62
292
 
63
293
 # load the variables
64
294
 
@@ -66,8 +296,6 @@
66
296
 
67
297
  TRAIN_DATA_SIZE2 = 0
68
298
 
69
-
70
-
71
299
  test2 = numpy.loadtxt(open("one_record.csv"), delimiter=",")
72
300
 
73
301
  [tensor2,score2] = numpy.hsplit(test2, [1])
@@ -115,259 +343,3 @@
115
343
  sess2.close()
116
344
 
117
345
  ````
118
-
119
-
120
-
121
-
122
-
123
- Error message
124
-
125
- ````
126
-
127
- ValueError: Cannot feed value of shape (1,) for Tensor 'tensor_placeholder:0', which has shape '(?, 1)'
128
-
129
- ````
130
-
131
-
132
-
133
-
134
-
135
-
136
-
137
-
138
-
139
- ````
140
-
141
-
142
-
143
-
144
-
145
- import tensorflow as tf
146
-
147
- import numpy
148
-
149
- import os
150
-
151
-
152
-
153
- cwd = os.getcwd()
154
-
155
-
156
-
157
- SCORE_SIZE = 12
158
-
159
- HIDDEN_UNIT_SIZE = 70
160
-
161
- TRAIN_DATA_SIZE = 45
162
-
163
-
164
-
165
- raw_input = numpy.loadtxt(open("test.csv"), delimiter=",")
166
-
167
- [tensor, score] = numpy.hsplit(raw_input, [1])
168
-
169
- [tensor_train, tensor_test] = numpy.vsplit(tensor, [TRAIN_DATA_SIZE])
170
-
171
- [score_train, score_test] = numpy.vsplit(score, [TRAIN_DATA_SIZE])
172
-
173
- # tensor is the correct-answer data, *_train is for training, score is the input data, *_test is the practice data
174
-
175
-
176
-
177
- def inference(score_placeholder):
178
-
179
- with tf.name_scope('hidden1') as scope:
180
-
181
- hidden1_weight = tf.Variable(tf.truncated_normal([SCORE_SIZE, HIDDEN_UNIT_SIZE], stddev=0.1), name="hidden1_weight")
182
-
183
- hidden1_bias = tf.Variable(tf.constant(0.1, shape=[HIDDEN_UNIT_SIZE]), name="hidden1_bias")
184
-
185
- hidden1_output = tf.nn.relu(tf.matmul(score_placeholder, hidden1_weight) + hidden1_bias)
186
-
187
- with tf.name_scope('output') as scope:
188
-
189
- output_weight = tf.Variable(tf.truncated_normal([HIDDEN_UNIT_SIZE, 1], stddev=0.1), name="output_weight")
190
-
191
- output_bias = tf.Variable(tf.constant(0.1, shape=[1]), name="output_bias")
192
-
193
- output = tf.matmul(hidden1_output, output_weight) + output_bias
194
-
195
- print(output)
196
-
197
- return tf.nn.l2_normalize(output, 0)
198
-
199
-
200
-
201
- def loss(output, tensor_placeholder, loss_label_placeholder):
202
-
203
- with tf.name_scope('loss') as scope:
204
-
205
- loss = tf.nn.l2_loss(output - tf.nn.l2_normalize(tensor_placeholder, 0))
206
-
207
- tf.summary.scalar('loss_label_placeholder', loss)
208
-
209
- return loss
210
-
211
-
212
-
213
- def training(loss):
214
-
215
- with tf.name_scope('training') as scope:
216
-
217
- train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
218
-
219
- return train_step
220
-
221
-
222
-
223
-
224
-
225
-
226
-
227
- with tf.Graph().as_default():
228
-
229
- tensor_placeholder = tf.placeholder("float", [None,1], name="tensor_placeholder")
230
-
231
- score_placeholder = tf.placeholder("float", [None, SCORE_SIZE], name="score_placeholder")
232
-
233
- loss_label_placeholder = tf.placeholder("string", name="loss_label_placeholder")
234
-
235
-
236
-
237
- feed_dict_train={
238
-
239
- tensor_placeholder: tensor_train,
240
-
241
- score_placeholder: score_train,
242
-
243
- loss_label_placeholder: "loss_train"
244
-
245
- }
246
-
247
- # tensor is the correct-answer data, *_train is for training, score is the input data, *_test is the practice data
248
-
249
-
250
-
251
- feed_dict_test={
252
-
253
- tensor_placeholder: tensor_test,
254
-
255
- score_placeholder: score_test,
256
-
257
- loss_label_placeholder: "loss_test"
258
-
259
- }
260
-
261
-
262
-
263
- output = inference(score_placeholder)
264
-
265
- loss = loss(output, tensor_placeholder, loss_label_placeholder)
266
-
267
- training_op = training(loss)
268
-
269
- summary_op = tf.summary.merge_all()
270
-
271
- init = tf.global_variables_initializer()
272
-
273
- best_loss = float("inf")
274
-
275
-
276
-
277
- with tf.Session() as sess:
278
-
279
- summary_writer = tf.summary.FileWriter('data', graph_def=sess.graph_def)
280
-
281
- sess.run(init)
282
-
283
- for step in range(10000):
284
-
285
- sess.run(training_op, feed_dict=feed_dict_train)
286
-
287
- loss_test = sess.run(loss, feed_dict=feed_dict_test)
288
-
289
- if loss_test < best_loss:
290
-
291
- best_loss = loss_test
292
-
293
- best_match = sess.run(output, feed_dict=feed_dict_test)
294
-
295
- if step % 100 == 0:
296
-
297
- summary_str = sess.run(summary_op, feed_dict=feed_dict_test)
298
-
299
- summary_str += sess.run(summary_op, feed_dict=feed_dict_train)
300
-
301
- summary_writer.add_summary(summary_str, step)
302
-
303
-
304
-
305
- saver=tf.train.Saver()
306
-
307
- saver.save(sess,cwd+'/model.ckpt')
308
-
309
- print(cwd)
310
-
311
- print(best_match)
312
-
313
- print('Saved a model.')
314
-
315
- sess.close()
316
-
317
-
318
-
319
- with tf.Session() as sess2:
320
-
321
- # load the variables
322
-
323
- # new data
324
-
325
- TRAIN_DATA_SIZE2 = 0
326
-
327
- test2 = numpy.loadtxt(open("one_record.csv"), delimiter=",")
328
-
329
- [tensor2,score2] = numpy.hsplit(test2, [1])
330
-
331
- #[tensor_train2,tensor_test2] = numpy.vsplit(tensor2, [TRAIN_DATA_SIZE2])
332
-
333
- #[score_train2, score_test2] = numpy.vsplit(score2, [TRAIN_DATA_SIZE2])
334
-
335
- # tensor is the correct-answer data, *_train is for training, score is the input data, *_test is the practice data
336
-
337
- print(tensor2)
338
-
339
- print(score2)
340
-
341
- # build the model
342
-
343
- feed_dict_test2 = {
344
-
345
- tensor_placeholder:tensor2,
346
-
347
- score_placeholder:score2,
348
-
349
- loss_label_placeholder:"loss_test2"
350
-
351
- }
352
-
353
- # restore the checkpoint and feed the weights fixed by the loss function into the prediction function
354
-
355
- saver = tf.train.Saver()
356
-
357
- cwd = os.getcwd()
358
-
359
- saver.restore(sess2,cwd + "/model.ckpt")
360
-
361
-
362
-
363
- print("recover")
364
-
365
- best_match2 = sess2.run(output, feed_dict=feed_dict_test2)
366
-
367
- print(best_match2)
368
-
369
- print("fin")
370
-
371
- sess2.close()
372
-
373
- ````
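Regarding the shape problem described in this revision (hsplit on a one-row array yields shapes `(1,)` and `(12,)`, which the `(?, 1)` / `(?, 12)` placeholders reject): an alternative to reshaping afterwards is to keep the row 2-D at load time. This is only a sketch, assuming `one_record.csv` holds a single 13-value record.

````
import numpy

# ndmin=2 makes loadtxt return a 1x13 matrix even for a single CSV row.
test2 = numpy.loadtxt(open("one_record.csv"), delimiter=",", ndmin=2)
[tensor2, score2] = numpy.hsplit(test2, [1])
print(tensor2.shape, score2.shape)   # (1, 1) (1, 12)
````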

1

Revision

2017/07/01 22:00

Posted

zakio49

Score 29

test CHANGED
File without changes
test CHANGED
@@ -8,6 +8,32 @@
8
8
 
9
9
 
10
10
 
11
+ 1. Training CSV file: 13 columns × 50 rows of data, with the correct answers written in the first column.
12
+
13
+ 2. For training, hsplit is applied once to split the data into [tensor, score]
14
+
15
+ 3. Of the 50 rows, 45 go to training and 5 are set aside for testing (see the split sketch after this hunk).
16
+
17
+ **4. After training, restore the model and feed in the practice data.**
18
+
19
+ * The practice data contains no correct answer, so it is 12 columns × ? rows, one column fewer than the training data.
20
+
21
+ → This raised an error, so I worked around it by putting a 0 in the first column of the practice data to make it 13 columns.
22
+
23
+ → That worked and the model can now be restored. Since I want to predict from user input, I would like this to work with a single row of data.
24
+
25
+ * Since vsplit cannot be used when the practice data is only one row, I have commented out those lines in the current code.
26
+
27
+
28
+
29
+
30
+
31
+
32
+
33
+ **
34
+
35
+
36
+
11
37
 
12
38
 
13
39
 ・Reference links
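For reference, a small sketch of the split described in items 1-3 above, assuming `test.csv` is the 13-column × 50-row training file with the correct answer in the first column (shapes shown as comments):

````
import numpy

TRAIN_DATA_SIZE = 45

raw_input = numpy.loadtxt(open("test.csv"), delimiter=",")             # (50, 13)
[tensor, score] = numpy.hsplit(raw_input, [1])                         # (50, 1) answers, (50, 12) features
[tensor_train, tensor_test] = numpy.vsplit(tensor, [TRAIN_DATA_SIZE])  # (45, 1) / (5, 1)
[score_train, score_test] = numpy.vsplit(score, [TRAIN_DATA_SIZE])     # (45, 12) / (5, 12)
````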
@@ -28,40 +54,6 @@
28
54
 
29
55
 ・The execution environment is Windows 10 + Anaconda + Python 3.5 + TensorFlow 1.0 or later
30
56
 
31
-
32
-
33
- My current understanding and where I think I am stuck
34
-
35
- * Revised 7/1 17:02
36
-
37
-
38
-
39
-
40
-
41
- Having fixed various points, a new question:
42
-
43
-
44
-
45
- The flow for reusing the restored model is:
46
-
47
- 1. Load the variables (CSV file)
48
-
49
- 2. Convert them into a usable data format
50
-
51
- 3. Build the model in a form usable by the inference(score_placeholder) used during training
52
-
53
- 4. Restore meta.ckpt and feed the weights into the prediction function (inference)
54
-
55
- 5. Run sess.run and print the predicted value.
56
-
57
-
58
-
59
- **However, unlike the training data, the practice data has no correct-answer column, so its CSV has one column fewer. If the model has to be assembled differently, the inference() function used at training time cannot be reused as is, and I am stuck on the worry that defining a new function inside with tf.Session() as sess2: would mean the values in model.ckpt are not reflected.
60
-
61
- At the moment I am running the practice data with the correct answers included, but an error occurs and it is not going well. Thank you in advance.
62
-
63
- **
64
-
65
57
  ````
66
58
 
67
59
 
@@ -74,6 +66,8 @@
74
66
 
75
67
  TRAIN_DATA_SIZE2 = 0
76
68
 
69
+
70
+
77
71
  test2 = numpy.loadtxt(open("one_record.csv"), delimiter=",")
78
72
 
79
73
  [tensor2,score2] = numpy.hsplit(test2, [1])