[SOLVED] Keras / Tensorflow incompatible shape

Issue

I’m pretty new to tensorflow / keras and I can’t find a fix to this problem. I have a training data set of ~4000 20-dimensional vectors that each describe a document. I also have those same document-vectors at a later state. I want to predict the final document-vector from its initial state. I compared the document vectors at state 0 with their final state using cosine similarity and got about .5. The goal is to improve on that with a simple model. Currently I am doing:

model = Sequential()
model.add(Dense(20, activation='relu', input_dim=20))
model.compile(optimizer='adam', loss='cosine_similarity', metrics=[tf.keras.metrics.CosineSimilarity(axis=1)])
model.summary()
history = model.fit(input_train, y_train,
                epochs=30,
                batch_size=16,
                validation_data=(input_test,y_test),
                callbacks=[tbCallBack]
               )

After 30 epochs this gives me a validation cosine similarity of .66, so the model does seem to improve on the initial .5 and add at least some value.

Then I want to look at the predictions to see if they make any sense:

lol = np.asarray([0.0125064 , 0.01250269, 0.01250133, 0.01250481, 0.01250508,
   0.0125009 , 0.0125009 , 0.01250437, 0.01250131, 0.01250181,
   0.01250403, 0.0125038 , 0.01250372, 0.01250246, 0.01250183,
   0.01250226, 0.01250294, 0.76244247, 0.01250485, 0.01250205])
model.predict([lol])
#model.predict(lol)

Both `predict` calls give me the following warning and fail with an incompatible shape:

WARNING:tensorflow:Model was constructed with shape (None, 20) for input KerasTensor(type_spec=TensorSpec(shape=(None, 20), dtype=tf.float32, name='dense_69_input'), name='dense_69_input', description="created by layer 'dense_69_input'"), but it was called on an input with incompatible shape (None,).

Does someone know how to solve this? Also, if someone is familiar with this kind of goal, is this the right way? Is there something I can do differently?

Any help is very much appreciated!

Solution

The model was built for inputs of shape `(None, 20)`, but `lol` has shape `(20,)` — it is missing the batch dimension. Use `np.expand_dims` to add it:

import tensorflow as tf
import numpy as np

model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(20, activation='relu', input_dim=20))
model.compile(optimizer='adam', loss='cosine_similarity', metrics=[tf.keras.metrics.CosineSimilarity(axis=1)])
model.summary()
input_train = tf.random.normal((5, 20))
y_train = tf.random.normal((5, 20))
history = model.fit(input_train, y_train,
                epochs=1,
                batch_size=2)

lol = np.asarray([0.0125064 , 0.01250269, 0.01250133, 0.01250481, 0.01250508,
   0.0125009 , 0.0125009 , 0.01250437, 0.01250131, 0.01250181,
   0.01250403, 0.0125038 , 0.01250372, 0.01250246, 0.01250183,
   0.01250226, 0.01250294, 0.76244247, 0.01250485, 0.01250205])
lol = np.expand_dims(lol, axis=0)
model.predict(lol)
array([[0.0727988 , 0.        , 0.3008919 , 0.00460427, 0.        ,
        0.01472487, 0.31665963, 0.11831823, 0.        , 0.05261957,
        0.        , 0.        , 0.        , 0.        , 0.13595472,
        0.07765757, 0.09340346, 0.        , 0.        , 0.        ]],
      dtype=float32)
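For reference, `np.expand_dims` is not the only way to get the required `(1, 20)` shape — `reshape` and `None`-indexing are equivalent. A minimal sketch (the vector values here are placeholders, not the original data):

```python
import numpy as np

# A single 20-dimensional document vector, shape (20,)
lol = np.asarray([0.0125] * 19 + [0.7624])

a = np.expand_dims(lol, axis=0)  # shape (1, 20)
b = lol.reshape(1, -1)           # shape (1, 20); -1 infers the 20
c = lol[None, :]                 # shape (1, 20); None adds a new axis

assert a.shape == b.shape == c.shape == (1, 20)
```

Any of these can be passed to `model.predict`, since all three add the leading batch dimension the model expects.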

Answered By – AloneTogether

Answer Checked By – Marilyn (BugsFixing Volunteer)
