
How to deal with UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape

I am getting the following warning in TensorFlow: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.

The reason I am getting this is:

    # Flatten batch elements to rank-2 tensor where 1st max_length rows belong to first batch element and so forth
    all_timesteps = tf.reshape(raw_output, [-1, n_dim])  # (batch_size*max_length, n_dim)
    # Indices to last element of each sequence.
    # Index to first element is the sequence order number times max sequence length.
    # Index to last element is the index to first element plus sequence length.
    row_inds = tf.range(0, batch_size) * max_length + (seq_len - 1)
    # Gather rows with indices to last elements of sequences
    # http://stackoverflow.com/questions/35892412/tensorflow-dense-gradient-explanation
    # This is due to gather returning IndexedSlices which is later converted
    # into a Tensor for gradient calculation.
    last_timesteps = tf.gather(all_timesteps, row_inds)  # (batch_size, n_dim)

tf.gather is causing the issue. I have been ignoring it until now because my architectures were not really big. However, now I have bigger architectures and a lot of data, and I am facing out-of-memory errors when training with batch sizes bigger than 10. I believe that dealing with this warning would allow me to fit my models in GPU memory.
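The index arithmetic above can be checked on its own with plain NumPy. This is a minimal sketch with made-up values for batch_size, max_length, and seq_len (none of these numbers come from the original post):

```python
import numpy as np

# Illustrative values, not from the original post.
batch_size, max_length = 3, 4
seq_len = np.array([2, 4, 3])  # actual length of each sequence

# After flattening, row i*max_length + t holds timestep t of batch element i,
# so the last valid timestep of element i sits at i*max_length + (seq_len[i] - 1).
row_inds = np.arange(batch_size) * max_length + (seq_len - 1)
print(row_inds)  # → [ 1  7 10]
```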

Please note that I am using TensorFlow 1.3.

1 Answer

    Best answer
  1. I managed to solve the issue by using tf.dynamic_partition instead of tf.gather. I replaced the above code like this:

        # Flatten batch elements to rank-2 tensor where 1st max_length rows belong to first batch element and so forth
        all_timesteps = tf.reshape(raw_output, [-1, n_dim])  # (batch_size*max_length, n_dim)
        # Indices to last element of each sequence.
        # Index to first element is the sequence order number times max sequence length.
        # Index to last element is the index to first element plus sequence length.
        row_inds = tf.range(0, batch_size) * max_length + (seq_len - 1)
        # Creating a vector of 0s and 1s that will specify what timesteps to choose.
        partitions = tf.reduce_sum(tf.one_hot(row_inds, tf.shape(all_timesteps)[0], dtype='int32'), 0)
        # Selecting the elements we want to choose: partition 1 holds the rows at row_inds.
        last_timesteps = tf.dynamic_partition(all_timesteps, partitions, 2)
        last_timesteps = last_timesteps[1]  # (batch_size, n_dim)
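The mask-and-partition trick can be sanity-checked in NumPy without TensorFlow. This sketch (all values are made up for illustration) builds the same 0/1 partition vector and confirms that the rows in partition 1 match a plain gather of row_inds. Note this relies on row_inds being strictly increasing, which holds here since each index is offset by i*max_length; tf.dynamic_partition keeps rows in their original order within each partition:

```python
import numpy as np

# Illustrative values, not from the original post.
batch_size, max_length, n_dim = 3, 4, 2
all_timesteps = np.arange(batch_size * max_length * n_dim,
                          dtype=np.float32).reshape(-1, n_dim)
seq_len = np.array([2, 4, 3])
row_inds = np.arange(batch_size) * max_length + (seq_len - 1)

# 0/1 mask marking the rows to select, mirroring
# tf.reduce_sum(tf.one_hot(row_inds, num_rows), 0).
partitions = np.zeros(all_timesteps.shape[0], dtype=np.int32)
partitions[row_inds] = 1

# Rows where the mask is 1 (what tf.dynamic_partition returns as partition 1)
# equal the rows a direct gather of row_inds would return.
selected = all_timesteps[partitions == 1]
assert np.array_equal(selected, all_timesteps[row_inds])
```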