How To Get Stable Results With Tensorflow, Setting Random Seed
Solution 1:
Setting the TensorFlow random seed affects only the current default graph. Since you are creating a new graph for your training and setting it as the default (with g.as_default():), you must set the random seed within the scope of that with block.
For example, your loop should look like the following:
for i in range(3):
    g = tf.Graph()
    with g.as_default():
        tf.set_random_seed(1)
        accuracy_result, average_error = network.train_network(
            parameters, inputHeight, inputWidth, inputChannels, outputClasses)
Note that this will use the same random seed for each iteration of the outer for loop. If you want to use a different, but still deterministic, seed in each iteration, you can use tf.set_random_seed(i + 1) instead.
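As a minimal sketch of that per-iteration variant (reusing the same network.train_network call from the answer above):

for i in range(3):
    g = tf.Graph()
    with g.as_default():
        # Different, but still deterministic, graph-level seed for each run.
        tf.set_random_seed(i + 1)
        accuracy_result, average_error = network.train_network(
            parameters, inputHeight, inputWidth, inputChannels, outputClasses)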
Solution 2:
Deterministic behaviour can be obtained either by supplying a graph-level or an operation-level seed. Both worked for me. A graph-level seed can be set with tf.set_random_seed. An operation-level seed can be set, e.g., in a variable initializer as in:
myvar = tf.Variable(tf.truncated_normal((10, 10), stddev=0.1, seed=0))
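For illustration, a minimal TF 1.x sketch (the session boilerplate is added here for completeness and is not part of the original answer) showing both seed levels:

import tensorflow as tf

# Graph-level seed: all ops that rely on the graph seed become reproducible.
tf.set_random_seed(0)

# Operation-level seed: pins this particular initializer regardless of the graph seed.
myvar = tf.Variable(tf.truncated_normal((10, 10), stddev=0.1, seed=0))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(myvar))  # identical values on every fresh run of the script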
Solution 3:
Tensorflow 2.0 Compatible Answer: For TensorFlow versions 2.0 and above, the command used to set the global random seed is tf.random.set_seed.
If we are migrating from TensorFlow 1.x to 2.x, we can use the command tf.compat.v2.random.set_seed.
Note that tf.function acts like a re-run of a program in this case.
To set the operation-level seed (as answered above), we can use, for example, tf.random.uniform([1], seed=1).
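For example, a minimal TF 2.x sketch combining the global and operation-level seeds (the surrounding script is only an illustration):

import tensorflow as tf

# Global seed for the whole program.
tf.random.set_seed(1)

# Operation-level seed on this particular op; together with the global seed
# this makes the op's output reproducible across runs of the script.
x = tf.random.uniform([1], seed=1)
print(x.numpy())  # identical value on every fresh run of this script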
For more details, refer to this TensorFlow page.
Solution 4:
Backend setup: cuda:10.1, cudnn: 7, tensorflow-gpu: 2.1.0, keras: 2.2.4-tf, and a customized vgg19 model.
After looking into the issue of unstable results for the TensorFlow backend with GPU training and large Keras-based neural network models, I was finally able to get reproducible (stable) results as follows:
- Import only the libraries required for setting the seed, and initialize a seed value
import tensorflow as tf
import os
import numpy as np
import random
SEED = 0
- Function to initialize seeds for all libraries which might have stochastic behavior
def set_seeds(seed=SEED):
    os.environ['PYTHONHASHSEED'] = str(seed)
    random.seed(seed)
    tf.random.set_seed(seed)
    np.random.seed(seed)
- Activate TensorFlow deterministic behavior
def set_global_determinism(seed=SEED):
    set_seeds(seed=seed)

    os.environ['TF_DETERMINISTIC_OPS'] = '1'
    os.environ['TF_CUDNN_DETERMINISTIC'] = '1'

    tf.config.threading.set_inter_op_parallelism_threads(1)
    tf.config.threading.set_intra_op_parallelism_threads(1)

# Call the above function with the seed value
set_global_determinism(seed=SEED)
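As a usage sketch, the toy Sequential model below is a hypothetical stand-in for the customized vgg19 model (it is not part of the original setup) and only demonstrates that repeated runs now match:

from tensorflow import keras

set_global_determinism(seed=SEED)  # must run before any other model or data code

# Hypothetical toy model and data, purely to illustrate reproducibility.
model = keras.Sequential([
    keras.layers.Dense(16, activation='relu', input_shape=(8,)),
    keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse')

x = np.random.rand(64, 8).astype('float32')
y = np.random.rand(64, 1).astype('float32')
history = model.fit(x, y, epochs=2, shuffle=True, verbose=0)
print(history.history['loss'])  # same losses on every fresh run of the script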
Important notes:
- Please call the above code before executing any other code
- Model training might become slower since the operations are forced to be deterministic, so there is a tradeoff
- I experimented several times with varying numbers of epochs and different settings (including model.fit() with shuffle=True), and the above code gave me reproducible results.
References:
- https://suneeta-mall.github.io/2019/12/22/Reproducible-ml-tensorflow.html
- https://keras.io/getting_started/faq/#how-can-i-obtain-reproducible-results-using-keras-during-development
- https://www.tensorflow.org/api_docs/python/tf/config/threading/set_inter_op_parallelism_threads
- https://www.tensorflow.org/api_docs/python/tf/random/set_seed?version=nightly
Solution 5:
It seems as if none of these answers will work due to underlying implementation issues in CuDNN.
You can get a bit more determinism by adding an extra flag
os.environ['PYTHONHASHSEED'] = str(SEED)
os.environ['TF_CUDNN_DETERMINISTIC'] = '1'  # new flag present in tf 2.0+
random.seed(SEED)
np.random.seed(SEED)
tf.set_random_seed(SEED)
But this still won't be entirely deterministic. To get an even more exact solution, you need to use the procedure outlined in this NVIDIA repo.
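As a hedged side note (not part of the original answer): on newer TensorFlow releases, roughly 2.9 or later, the environment-variable flags above are superseded by a dedicated API:

import tensorflow as tf

SEED = 0

# Assumes TF >= 2.9 (enable_op_determinism) and TF >= 2.7 (set_random_seed).
tf.keras.utils.set_random_seed(SEED)            # seeds Python's random, NumPy and TF
tf.config.experimental.enable_op_determinism()  # request deterministic op kernels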