
Tensorflow : Graph is finalized and cannot be modified

I am trying to save variables through checkpoints to add fault tolerance to my program. I am trying to achieve this using the MonitoredTrainingSession function. The following is my configuration:

import tensorflow as tf

global_step = tf.Variable(10, trainable=False, name='global_step')
x = tf.constant(2)

with tf.device("/job:local/task:0"):
    y1 = tf.Variable(x + 300)

with tf.device("/job:local/task:1"):
    y2 = tf.Variable(x**2)

with tf.device("/job:local/task:2"):
    y3 = tf.Variable(5*x)

with tf.device("/job:local/task:3"):
    y0 = tf.Variable(x - 66)
    y = y0 + y1 + y2 + y3

model = tf.global_variables_initializer()
saver = tf.train.Saver(sharded=True)

chief = tf.train.ChiefSessionCreator(scaffold=None, master='grpc://localhost:2222', config=None, checkpoint_dir='/home/tensorflow/codes/checkpoints')
summary_hook = tf.train.SummarySaverHook(save_steps=None, save_secs=10, output_dir='/home/tensorflow/codes/savepoints', summary_writer=None, scaffold=None, summary_op=tf.summary.tensor_summary(name="y", tensor=y))
saver_hook = tf.train.CheckpointSaverHook(checkpoint_dir='/home/tensorflow/codes/checkpoints', save_secs=None, save_steps=True, saver=saver, checkpoint_basename='model.ckpt', scaffold=None)

# with tf.train.MonitoredSession(session_creator=ChiefSessionCreator,hooks=[saver_hook, summary_hook]) as sess:

with tf.train.MonitoredTrainingSession(master='grpc://localhost:2222', is_chief=True, checkpoint_dir='/home/tensorflow/codes/checkpoints',
    scaffold=None, hooks=[saver_hook,summary_hook], chief_only_hooks=None, save_checkpoint_secs=None, save_summaries_steps=True, config=None) as sess:

    while not sess.should_stop():
        result = sess.run(tf.global_variables_initializer())

I get the following RuntimeError, which I am unable to resolve:

Traceback (most recent call last):
  File "", line 39, in <module>
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/", line 1187, in global_variables_initializer
    return variables_initializer(global_variables())
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/", line 1169, in variables_initializer
    return control_flow_ops.group(*[v.initializer for v in var_list], name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/", line 2773, in group
    deps.append(_GroupControlDeps(dev, ops_on_device[dev]))
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/", line 2721, in _GroupControlDeps
    return no_op(name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/", line 186, in no_op
    result = _op_def_lib.apply_op("NoOp", name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/", line 759, in apply_op
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/", line 2199, in create_op
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/", line 1925, in _check_not_finalized
    raise RuntimeError("Graph is finalized and cannot be modified.")
RuntimeError: Graph is finalized and cannot be modified.


  1. The root cause of your error is that MonitoredTrainingSession has finalized (frozen) the graph, so your call to tf.global_variables_initializer() can no longer modify it.

    Having said that, there are multiple things that require attention:

    1) Why do you try to repeatedly initialize all variables here?

    while not sess.should_stop():

    2) It seems some of your code duplicates what MonitoredTrainingSession already does internally, e.g. creating a ChiefSessionCreator. Can you please take another look at the MonitoredTrainingSession code, or search for sample usage, to see how it is supposed to be used?
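The finalization behaviour behind this error can be sketched in plain Python. This is a hypothetical stand-in class for illustration, not TensorFlow's actual Graph: once finalize() is called (which MonitoredTrainingSession does on entry), any attempt to add an op raises, which is exactly what tf.global_variables_initializer() attempts inside the session.

```python
class Graph:
    """Minimal stand-in illustrating TensorFlow's graph-finalization behaviour."""

    def __init__(self):
        self.ops = []
        self._finalized = False

    def add_op(self, name):
        # TensorFlow performs the same check before creating any op.
        if self._finalized:
            raise RuntimeError("Graph is finalized and cannot be modified.")
        self.ops.append(name)

    def finalize(self):
        self._finalized = True


g = Graph()
g.add_op("init")           # fine: graph is still mutable
g.finalize()               # MonitoredTrainingSession does this on entry
try:
    g.add_op("late_init")  # adding ops afterwards fails
except RuntimeError as e:
    print(e)               # prints: Graph is finalized and cannot be modified.
```

The takeaway is that every op, including initializers, must be created before the monitored session is entered.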

  2. If you want to re-initialize the graph in a loop, you can create a new graph at the top of each loop iteration.

    import tensorflow as tf
  3. Since your aim is to use MonitoredTrainingSession to get checkpointing, the usage is much simpler than your example:

    import tensorflow as tf
    global_step = tf.contrib.framework.get_or_create_global_step()
    x = tf.constant(2)
    y1 = x + 300
    y2 = x**2
    y3 = x * 5
    y0 = x - 66
    y = y0 + y1 + y2 + y3
    step = tf.assign_add(global_step, 1)
    with tf.train.MonitoredTrainingSession(checkpoint_dir='/tmp/checkpoints') as sess:
        while not sess.should_stop():
            result, i = sess.run([y, step])
            print(result, i)
    • The hooks for saving/restoring checkpoints are created by MonitoredTrainingSession for you.
    • If you pass in save_checkpoint_secs you can change the frequency of checkpointing from the 10 minute default. I find a higher frequency isn't worth it: saving checkpoints isn't free, so very frequent checkpointing will end up slowing training down.
    • The ChiefSessionCreator and gRPC config are only needed for distributed execution (see the distributed TensorFlow documentation for a description of the concepts). Similarly with assigning ops to specific devices: make sure you really need to do this before using it, as it can slow things down if you're not careful.
    • You don't need to wrap the results of operations on tensors with tf.Variable(): operations on tensors already produce tensors.
    • You can pass save_summaries_steps for monitoring training with tensorboard, but by default that'll happen every 100 steps anyway.
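The save_checkpoint_secs trade-off mentioned above comes down to a simple elapsed-time check. A simplified plain-Python model of that scheduling (not TensorFlow's actual hook implementation) might look like this:

```python
class PeriodicSaver:
    """Triggers a (mock) checkpoint save at most once per `save_secs` seconds."""

    def __init__(self, save_secs):
        self.save_secs = save_secs
        self._last_save = None

    def should_save(self, now):
        # Save on the first call, then only once the interval has elapsed.
        if self._last_save is None or now - self._last_save >= self.save_secs:
            self._last_save = now
            return True
        return False


saver = PeriodicSaver(save_secs=600)   # the 10-minute default
assert saver.should_save(now=0)        # first check always saves
assert not saver.should_save(now=300)  # 5 minutes later: too soon
assert saver.should_save(now=600)      # 10 minutes elapsed: save again
```

Lowering save_secs just makes should_save return True more often, and each True means paying the full cost of serializing all variables to disk, which is why very frequent checkpointing slows training down.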