TensorFlow: Your Input Ran Out Of Data
Solution 1:
To make sure that you have "at least steps_per_epoch * epochs batches", set steps_per_epoch to:
steps_per_epoch = len(X_train) // batch_size
validation_steps = len(X_test) // batch_size  # if you have validation data
You can see the maximum number of batches that model.fit() can consume from the progress bar at the point where training is interrupted:
5230/10000 [==============>...............] - ETA: 2:05:22 - loss: 0.0570
Here, the maximum would be 5230 - 1 (i.e. 5229 complete batches, since the generator ran out on step 5230).
Importantly, keep in mind that by default, batch_size is 32 in model.fit().
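Putting that together, a minimal sketch (assuming NumPy arrays X_train/y_train/X_test/y_test and an already-compiled model) could look like this:

batch_size = 32  # the model.fit() default, made explicit here
steps_per_epoch = len(X_train) // batch_size     # only count complete batches
validation_steps = len(X_test) // batch_size

history = model.fit(
    X_train, y_train,
    batch_size=batch_size,
    steps_per_epoch=steps_per_epoch,
    validation_data=(X_test, y_test),
    validation_steps=validation_steps,
    epochs=10)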
If you're using a tf.data.Dataset, you can also add the repeat() method, but be careful: it will loop indefinitely (unless you specify a number).
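For example, a sketch assuming a batched tf.data.Dataset called train_ds with a known sample count: repeat() keeps the data flowing, and steps_per_epoch bounds each epoch so training still terminates.

train_ds = train_ds.repeat()                  # yields batches indefinitely
steps = num_train_samples // batch_size       # assumed sample count and batch size
history = model.fit(train_ds, epochs=10, steps_per_epoch=steps)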
Solution 2:
I have also had a number of models crash with the same warning while trying to train them. The training dataset is created using tf.keras.preprocessing.image_dataset_from_directory() and split 80/20. I created a variable to try to avoid running out of images. I am using ResNet50 with my own images:
import numpy as np

TRAIN_STEPS_PER_EPOCH = np.ceil((image_count*0.8/BATCH_SIZE)-1)
# to ensure that there are enough images for the training batch
VAL_STEPS_PER_EPOCH = np.ceil((image_count*0.2/BATCH_SIZE)-1)
but it still runs out of data. BATCH_SIZE is set to 32, so I am taking 80% of the number of images, dividing by 32, and then subtracting 1 to leave a surplus... or so I thought.
history = model.fit(
    train_ds,
    steps_per_epoch=TRAIN_STEPS_PER_EPOCH,
    epochs=EPOCHS,
    verbose=1,
    validation_data=val_ds,
    validation_steps=VAL_STEPS_PER_EPOCH,
    callbacks=tensorboard_callback)
The error, after 3 hours of processing and a single successful epoch, is:
Epoch 1/25
374/374 [==============================] - 8133s 22s/step - loss: 7.0126 - accuracy: 0.0028 - val_loss: 6.8585 - val_accuracy: 0.0000e+00
Epoch 2/25
1/374 [..............................] - ETA: 0s - loss: 6.0445 - accuracy: 0.0000e+00
WARNING:tensorflow:Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least steps_per_epoch * epochs batches (in this case, 9350.0 batches). You may need to use the repeat() function when building your dataset.
This might help:
print(train_ds)
<BatchDataset shapes: ((None, 224, 224, 3), (None,)), types: (tf.float32, tf.int32)>

print(val_ds)
<BatchDataset shapes: ((None, 224, 224, 3), (None,)), types: (tf.float32, tf.int32)>

print(TRAIN_STEPS_PER_EPOCH)
374.0

print(VAL_STEPS_PER_EPOCH)
93.0
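One way to sanity-check those numbers (a sketch, assuming TF 2.x and the datasets above) is to ask the batched datasets how many batches they actually contain and cap the step counts accordingly:

import tensorflow as tf

# cardinality is known for image_dataset_from_directory; other pipelines may report UNKNOWN_CARDINALITY
train_batches = tf.data.experimental.cardinality(train_ds).numpy()
val_batches = tf.data.experimental.cardinality(val_ds).numpy()

TRAIN_STEPS_PER_EPOCH = min(int(TRAIN_STEPS_PER_EPOCH), int(train_batches))
VAL_STEPS_PER_EPOCH = min(int(VAL_STEPS_PER_EPOCH), int(val_batches))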
Solution 3:
The solution which worked for me was to set drop_remainder=True when batching the dataset. This automatically drops any leftover samples that don't fill a complete batch.
For example:
dataset = tf.data.Dataset.from_tensor_slices((images, targets)) \
    .batch(12, drop_remainder=True)
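As a rough illustration with hypothetical numbers: 100 samples batched by 12 with drop_remainder=True gives 8 full batches, and the last 4 samples are silently discarded, so steps_per_epoch can safely be set to that batch count.

import numpy as np
import tensorflow as tf

images = np.zeros((100, 28, 28, 1), dtype=np.float32)   # hypothetical data
targets = np.zeros((100,), dtype=np.int64)

dataset = tf.data.Dataset.from_tensor_slices((images, targets)) \
    .batch(12, drop_remainder=True)
print(tf.data.experimental.cardinality(dataset).numpy())  # 8 full batches, 4 samples dropped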
Solution 4:
I had the same problem in TF 2.1. It has something to do with the shape/type of the input, namely the query. In my case, I solved the problem as follows (my model takes 3 inputs):
x = [[test_X[0][0]], [test_X[1][0]], [test_X[2][0]]]
MODEL.predict(x)
Output:
WARNING:tensorflow:Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least steps_per_epoch * epochs batches (in this case, 7 batches). You may need to use the repeat() function when building your dataset.
array([[2.053718]], dtype=float32)
Answer:
x = [np.array([test_X[0][0]]), np.array([test_X[1][0]]), np.array([test_X[2][0]])]
MODEL.predict(x)
Output:
array([[2.053718]], dtype=float32)
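The difference is only in how the inputs are wrapped: np.array(...) gives each of the three inputs an explicit batch shape, presumably because plain nested Python lists leave the batch dimension for Keras to infer. A tiny hypothetical illustration:

import numpy as np

one_sample = [0.25, 0.5, 0.75]               # hypothetical single sample for one input head
as_list = [one_sample]                       # plain Python list of lists
as_array = np.array([one_sample])            # ndarray with explicit shape (1, 3)

print(type(as_list), np.shape(as_list))      # <class 'list'> (1, 3)
print(type(as_array), as_array.shape)        # <class 'numpy.ndarray'> (1, 3)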
Solution 5:
I had the same problem, and decreasing validation_steps from 50 to 10 solved the issue.
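If you'd rather not pick the number by hand, a sketch (assuming a NumPy validation set X_val and a known batch_size) is to derive validation_steps from the data so it can never exceed the available batches:

validation_steps = len(X_val) // batch_size  # hypothetical X_val; integer division keeps only full batches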