
Question edit history

Revision 1

Fixed the code

2017/07/09 11:20

Posted

zakio49

Score: 29

title CHANGED
@@ -1,1 +1,1 @@
- tensorflow: CSV data format
+ tensorflow: I'd like to ask how to fix the shape. CSV data format
body CHANGED
@@ -1,12 +1,14 @@
  In TensorFlow I train on 12 features; when I restore the model and feed it a record of real data, instead of a 1×12 matrix it demands a scalar of shape [1] and raises an error. Can the shape [1] requirement be satisfied by changing the input data?
 
  Current CSV
+ rank 1, shape [1,12]
  ````
  [ 0.71428573, 0.85714287, 0.71428573, 0.5714286 , 0.5714286 ,
  0.71428573, 0.5714286 , 0.71428573, 0.71428573, 0.71428573,
  0.5714286 , 0.71428573]
  ````
+ By changing it like this to shape [12,1,1], can I satisfy the shape [1] requirement?
- Change it like this
+ I'd appreciate your advice!
  ````
  [ [0.71428573], [0.85714287], [0.71428573], [0.5714286] , [0.5714286] ,
  [0.71428573], [0.5714286] , [0.71428573], [0.71428573], [0.71428573],
@@ -17,4 +19,117 @@
  ValueError: Argument must be a dense tensor: [array([ 0.71428573, 0.85714287, 0.71428573, 0.5714286 , 0.5714286 ,
  0.71428573, 0.5714286 , 0.71428573, 0.71428573, 0.71428573,
  0.5714286 , 0.71428573], dtype=float32)] - got shape [1, 12], but wanted [1].
- ````
+ ````
+ The rest of the code (the problem is in session2)
+
+ ```
+
+ import tensorflow as tf
+ import numpy
+ import os
+
+ cwd = os.getcwd()
+
+ SCORE_SIZE = 12
+ HIDDEN_UNIT_SIZE = 40
+ TRAIN_DATA_SIZE = 45
+ TACK = 1
+ raw_input = numpy.loadtxt(open("test.csv"), delimiter=",")
+ [tensor, score] = numpy.hsplit(raw_input, [1])
+ [tensor_train, tensor_test] = numpy.vsplit(tensor, [TRAIN_DATA_SIZE])
+ [score_train, score_test] = numpy.vsplit(score, [TRAIN_DATA_SIZE])
+ print(score_test)
+ # tensor is the correct-answer data; train is the training model, score is the training data, test is the real data
+
+ def inference(score_placeholder):
+     with tf.name_scope('hidden1') as scope:
+         hidden1_weight = tf.Variable(tf.truncated_normal([SCORE_SIZE, HIDDEN_UNIT_SIZE], stddev=0.01), name="hidden1_weight")
+         hidden1_bias = tf.Variable(tf.constant(0.1, shape=[HIDDEN_UNIT_SIZE]), name="hidden1_bias")
+         hidden1_output = tf.nn.relu(tf.matmul(score_placeholder, hidden1_weight) + hidden1_bias)
+     with tf.name_scope('output') as scope:
+         output_weight = tf.Variable(tf.truncated_normal([HIDDEN_UNIT_SIZE, 1], stddev=0.01), name="output_weight")
+         output_bias = tf.Variable(tf.constant(0.1, shape=[1]), name="output_bias")
+         output = tf.matmul(hidden1_output, output_weight) + output_bias
+     if TACK != 1:
+         print("saku1")
+         print(output)
+     else:
+         print("saku2")
+
+     return tf.nn.l2_normalize(output, 0)
+
+ def loss(output, tensor_placeholder, loss_label_placeholder):
+     with tf.name_scope('loss') as scope:
+         loss = tf.nn.l2_loss(output - tf.nn.l2_normalize(tensor_placeholder, 0))
+         tf.summary.scalar('loss_label_placeholder', loss)
+     return loss
+
+ def training(loss):
+     with tf.name_scope('training') as scope:
+         train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
+     return train_step
+
+
+
+ with tf.Graph().as_default():
+     tensor_placeholder = tf.placeholder(tf.float32, [None, 1], name="tensor_placeholder")
+     score_placeholder = tf.placeholder(tf.float32, [None, SCORE_SIZE], name="score_placeholder")
+     loss_label_placeholder = tf.placeholder("string", name="loss_label_placeholder")
+
+     feed_dict_train = {
+         tensor_placeholder: tensor_train,
+         score_placeholder: score_train,
+         loss_label_placeholder: "loss_train"
+     }
+
+     feed_dict_test = {
+         tensor_placeholder: tensor_test,
+         score_placeholder: score_test,
+         loss_label_placeholder: "loss_test"
+     }
+
+     output = inference(score_placeholder)
+     loss = loss(output, tensor_placeholder, loss_label_placeholder)
+     training_op = training(loss)
+     summary_op = tf.summary.merge_all()
+     init = tf.global_variables_initializer()
+     best_loss = float("inf")
+
+     with tf.Session() as sess:
+         summary_writer = tf.summary.FileWriter('data', graph_def=sess.graph_def)
+         sess.run(init)
+         for step in range(10000):
+             sess.run(training_op, feed_dict=feed_dict_train)
+             loss_test = sess.run(loss, feed_dict=feed_dict_test)
+             if loss_test < best_loss:
+                 best_loss = loss_test
+                 best_match = sess.run(output, feed_dict=feed_dict_test)
+             # if step % 100 == 0:
+             #     summary_str = sess.run(summary_op, feed_dict=feed_dict_test)
+             #     summary_str += sess.run(summary_op, feed_dict=feed_dict_train)
+             #     summary_writer.add_summary(summary_str, step)
+
+         saver = tf.train.Saver()
+         saver.save(sess, cwd + '/model.ckpt')
+         print(cwd)
+         print(best_match)
+         print('Saved a model.')
+         sess.close()
+
+     with tf.Session() as sess2:
+         # Restore the variables
+         summary_writer = tf.summary.FileWriter('data', graph=sess2.graph)
+         # sess2.run(init)
+         # New (real) data
+         TRAIN_DATA_SIZE2 = 0
+         test2 = numpy.loadtxt(open("one_record.csv"), delimiter=",").astype(numpy.float32)
+         score3 = [test2]
+         print(score3)
+         saver = tf.train.Saver()
+         cwd = os.getcwd()
+         saver.restore(sess2, cwd + "/model.ckpt")
+         best_match2 = sess2.run(inference(score3))
+         print(best_match2)
+         print("fin")
+         sess2.close()
+ ```