In the image-captioning task from the TensorFlow tutorial, I tried to save the encoder and decoder models in their entirety. The encoder saved without any trouble, but I am stuck because the decoder cannot be saved.
Versions:
Python: 3.8.10, TensorFlow: 2.3.0
Here is the decoder code.
class BahdanauAttention(tf.keras.Model):  # attention model
    def __init__(self, units):
        super(BahdanauAttention, self).__init__()
        self.W1 = tf.keras.layers.Dense(units)
        self.W2 = tf.keras.layers.Dense(units)
        self.V = tf.keras.layers.Dense(1)

    def call(self, features, hidden):
        # features (CNN_encoder output) shape == (batch_size, 64, embedding_dim)
        # hidden shape == (batch_size, hidden_size)
        # hidden_with_time_axis shape == (batch_size, 1, hidden_size)
        hidden_with_time_axis = tf.expand_dims(hidden, 1)

        # score shape == (batch_size, 64, hidden_size)
        score = tf.nn.tanh(self.W1(features) + self.W2(hidden_with_time_axis))

        # attention_weights shape == (batch_size, 64, 1)
        # the last axis is 1 because score is passed through self.V
        attention_weights = tf.nn.softmax(self.V(score), axis=1)

        # context_vector shape after the sum == (batch_size, hidden_size)
        context_vector = attention_weights * features
        context_vector = tf.reduce_sum(context_vector, axis=1)

        return context_vector, attention_weights


class RNN_Decoder(tf.keras.Model):
    def __init__(self, embedding_dim, units, vocab_size):
        super(RNN_Decoder, self).__init__()
        self.units = units

        self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
        self.gru = tf.keras.layers.GRU(self.units,
                                       return_sequences=True,
                                       return_state=True,
                                       recurrent_initializer='glorot_uniform')
        self.fc1 = tf.keras.layers.Dense(self.units)
        self.fc2 = tf.keras.layers.Dense(vocab_size)

        self.attention = BahdanauAttention(self.units)

    def call(self, x, features, hidden):
        # the attention is defined as a separate model
        context_vector, attention_weights = self.attention(features, hidden)

        # x shape after passing through the embedding layer == (batch_size, 1, embedding_dim)
        x = self.embedding(x)

        # x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)
        x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)

        # pass the concatenated vector to the GRU
        output, state = self.gru(x)

        # shape == (batch_size, max_length, hidden_size)
        x = self.fc1(output)

        # x shape == (batch_size * max_length, hidden_size)
        x = tf.reshape(x, (-1, x.shape[2]))

        # output shape == (batch_size * max_length, vocab)
        x = self.fc2(x)

        return x, state, attention_weights

    def reset_state(self, batch_size):
        return tf.zeros((batch_size, self.units))
Here is the code I used to try to save it.
decoder.save("saved_model/decoder")
Here is the result of running it.
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-50-3e1e31b58585> in <module>
----> 1 decoder.save("saved_model/decoder")

~\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\keras\engine\training.py in save(self, filepath, overwrite, include_optimizer, save_format, signatures, options)
   1976     ```
   1977     """
-> 1978     save.save_model(self, filepath, overwrite, include_optimizer, save_format,
   1979                     signatures, options)
   1980

.
.
.

~\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\autograph\impl\api.py in wrapper(*args, **kwargs)
    300   def wrapper(*args, **kwargs):
    301     with ag_ctx.ControlStatusCtx(status=ag_ctx.Status.DISABLED):
--> 302       return func(*args, **kwargs)
    303
    304   if inspect.isfunction(func) or inspect.ismethod(func):

TypeError: call() missing 2 required positional arguments: 'features' and 'hidden'
The traceback is very long, so I have omitted the middle part. Any help would be appreciated.
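For reference, the error seems to come from `model.save()` tracing `call()` with a single input, while this decoder's `call()` takes three positional arguments. A workaround commonly used with such subclassed models is to persist only the variables with `tf.train.Checkpoint` (the approach the tutorial itself takes for training state) instead of saving the whole model. Below is a minimal sketch; `TwoInputModel` is a hypothetical stand-in, not the actual decoder:

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in for the decoder: a subclassed model whose call()
# takes more than one positional argument, which is what trips up model.save().
class TwoInputModel(tf.keras.Model):
    def __init__(self):
        super(TwoInputModel, self).__init__()
        self.dense = tf.keras.layers.Dense(4)

    def call(self, x, hidden):
        # concatenate both inputs and project them
        return self.dense(tf.concat([x, hidden], axis=-1))

model = TwoInputModel()
# run one forward pass so the variables are created before saving
_ = model(tf.zeros((1, 3)), tf.zeros((1, 3)))

# save only the variables with a checkpoint instead of model.save()
ckpt = tf.train.Checkpoint(decoder=model)
save_path = ckpt.save("checkpoints/decoder_ckpt")

# restoring: rebuild the model, create its variables, then restore
restored = TwoInputModel()
_ = restored(tf.zeros((1, 3)), tf.zeros((1, 3)))
tf.train.Checkpoint(decoder=restored).restore(save_path)
```

This does not produce a SavedModel directory, only the weights, so the class definition must be available again at load time.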