
Input Contains NaN, Infinity Or A Value Too Large For Dtype('float64') In Tensorflow

I am trying to train an LSTM, and in my model I have an exponential learning rate decay and a dropout layer. In order to deactivate the dropout layer when testing and validating, I …

Solution 1:

When using tf.layers.dropout, the rate argument specifies what fraction of the input to drop; if you pass 1.0, all of the output is dropped. Replace 1.0 with 0.0 and it should work. TensorFlow documentation: https://www.tensorflow.org/api_docs/python/tf/layers/dropout
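
For illustration, here is a minimal sketch of that fix (not from the original post), assuming the TF 1.x tf.layers.dropout API used in the question; the placeholder shape and input values are made up for the example:

    import numpy as np
    import tensorflow as tf  # assumes the TF 1.x API, as in the question

    x = tf.placeholder(tf.float32, shape=(None, 4))
    # rate is the fraction of units to DROP: rate=1.0 zeroes out everything
    # (which can later surface as NaN/Inf errors), while rate=0.0 keeps everything.
    out = tf.layers.dropout(x, rate=0.0, training=True)

    with tf.Session() as sess:
        result = sess.run(out, feed_dict={x: np.ones((2, 4), dtype=np.float32)})
        print(result)  # identical to the input, since nothing is dropped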


Solution 2:

I am adding this because, even though @Almog's answer is correct, it didn't include the explanation I wanted. So, for anyone who was confused like me:

If you use:

'tf.nn.dropout()'

to deactivate the dropout layer you should put

keep_prob=1.0, not keep_prob=0.0,

as keep_prob means 'the probability that each element is kept.' So setting it to 1.0 keeps every element, which is what deactivates the layer.
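
A minimal sketch of that behaviour, assuming the TF 1.x tf.nn.dropout API (the placeholder names and shapes below are illustrative, not from the original question):

    import numpy as np
    import tensorflow as tf  # assumes the TF 1.x API

    x = tf.placeholder(tf.float32, shape=(None, 3))
    keep_prob = tf.placeholder_with_default(1.0, shape=())  # default 1.0 = dropout off

    y = tf.nn.dropout(x, keep_prob=keep_prob)

    with tf.Session() as sess:
        data = np.ones((2, 3), dtype=np.float32)
        # Training: keep only 50% of the elements (the rest become 0, survivors are scaled).
        print(sess.run(y, feed_dict={x: data, keep_prob: 0.5}))
        # Testing/validation: keep_prob stays at its default of 1.0, so the input passes through unchanged.
        print(sess.run(y, feed_dict={x: data}))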

If you are using

'tf.layers.dropout()'

you should put:

rate=0.0, not rate=1.0,

as rate here means 'the dropout rate (should be between 0 and 1). E.g. rate=0.1 would drop out 10% of input units.' So rate=0.0 means that none of the input units will be dropped.
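
As a sketch of the same idea with tf.layers.dropout, you can also keep a nonzero rate and toggle the layer with its training argument instead of changing rate; again this assumes the TF 1.x API, and the names below are illustrative:

    import numpy as np
    import tensorflow as tf  # assumes the TF 1.x API

    x = tf.placeholder(tf.float32, shape=(None, 3))
    training = tf.placeholder_with_default(False, shape=())  # default False = dropout inactive

    # rate=0.1 drops 10% of the input units, but only while training is True.
    y = tf.layers.dropout(x, rate=0.1, training=training)

    with tf.Session() as sess:
        data = np.ones((2, 3), dtype=np.float32)
        print(sess.run(y, feed_dict={x: data, training: True}))  # some units zeroed, rest scaled
        print(sess.run(y, feed_dict={x: data}))                  # eval: input passes through unchanged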
