Why does PyCharm occupy more and more memory while debugging a TensorFlow program?




I followed sentdex's YouTube video to create a TensorFlow program and it worked well. I then started to tweak some parameters (converting all float32 to float16 to try to make it faster) and got errors. So I tried to debug it with PyCharm. I discovered that during debugging, even when I was paused on a single line with no other process running at the same time, the python2.7 process kept occupying more and more memory until it hit 100% and my computer froze.



Is it that TensorFlow should not be debugged this way, or is something else wrong?
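For what it's worth, TensorFlow 1.x ships its own debugger, tfdbg, which is built for inspecting tensors during graph execution and avoids keeping the whole Python process paused inside an IDE. A minimal sketch of that alternative workflow (not a fix for the memory growth itself), assuming TensorFlow 1.x:

import tensorflow as tf
from tensorflow.python import debug as tf_debug

# Wrap an ordinary session so that every sess.run() call drops into
# the tfdbg command-line interface, where individual tensors can be
# inspected without suspending the Python process in PyCharm.
sess = tf.Session()
sess = tf_debug.LocalCLIDebugWrapperSession(sess)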


import tensorflow as tf
import time
import numpy as np

from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

# Hidden layer sizes.
n_node_hl1 = 500
n_node_hl2 = 500
n_node_hl3 = 500

n_classes = 10
batch_size = 100

x = tf.placeholder(tf.float16, [None, 784])
y = tf.placeholder(tf.float16)


def neural_network_model(data):
    # Weight/bias dictionaries for three hidden layers and the output layer.
    hidden_1_layer = {'weights': tf.Variable(tf.random_normal([784, n_node_hl1], dtype=tf.float16)),
                      'bias': tf.Variable(tf.random_normal([n_node_hl1], dtype=tf.float16))}
    hidden_2_layer = {'weights': tf.Variable(tf.random_normal([n_node_hl1, n_node_hl2], dtype=tf.float16)),
                      'bias': tf.Variable(tf.random_normal([n_node_hl2], dtype=tf.float16))}
    hidden_3_layer = {'weights': tf.Variable(tf.random_normal([n_node_hl2, n_node_hl3], dtype=tf.float16)),
                      'bias': tf.Variable(tf.random_normal([n_node_hl3], dtype=tf.float16))}
    output_layer = {'weights': tf.Variable(tf.random_normal([n_node_hl3, n_classes], dtype=tf.float16)),
                    'bias': tf.Variable(tf.random_normal([n_classes], dtype=tf.float16))}

    # Three fully connected ReLU layers followed by a linear output.
    l1 = tf.add(tf.matmul(data, hidden_1_layer['weights']), hidden_1_layer['bias'])
    l1 = tf.nn.relu(l1)
    l2 = tf.add(tf.matmul(l1, hidden_2_layer['weights']), hidden_2_layer['bias'])
    l2 = tf.nn.relu(l2)
    l3 = tf.add(tf.matmul(l2, hidden_3_layer['weights']), hidden_3_layer['bias'])
    l3 = tf.nn.relu(l3)
    output = tf.add(tf.matmul(l3, output_layer['weights']), output_layer['bias'])

    return output


def train_neural_network(x, y):
    prediction = neural_network_model(x)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y))
    optimizer = tf.train.AdamOptimizer(learning_rate=0.005).minimize(cost)

    hm_epochs = 10
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())

        for epoch in range(hm_epochs):
            epoch_loss = 0
            for _ in range(int(mnist.train.num_examples / batch_size)):
                epoch_x, epoch_y = mnist.train.next_batch(batch_size)
                # The MNIST reader yields float32; cast batches to float16.
                epoch_x = epoch_x.astype(np.float16)
                epoch_y = epoch_y.astype(np.float16)
                _, c = sess.run([optimizer, cost], feed_dict={x: epoch_x, y: epoch_y})
                epoch_loss += c
            print('Epoch', epoch, 'completed out of', hm_epochs, 'loss:', epoch_loss)

        correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
        accuracy = tf.reduce_mean(tf.cast(correct, tf.float16))
        print('Accuracy: ', accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))
        file_writer = tf.summary.FileWriter('/home/ramon/PycharmProjects/test/graph', sess.graph)


t = time.time()
train_neural_network(x, y)
print(time.time() - t)
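As an aside on the float16 errors (separate from the memory question): AdamOptimizer's default epsilon of 1e-8 lies below float16's smallest positive value (roughly 6e-8), so it underflows to zero after the cast and is a common source of NaNs. A hedged sketch of one possible workaround; the epsilon value here is only an illustrative choice, not a tuned one:

# Adam's default epsilon (1e-8) underflows to zero in float16, which
# can make the update step divide by zero and produce NaNs.
# Raising epsilon is a common workaround; 1e-4 is an illustrative value.
optimizer = tf.train.AdamOptimizer(learning_rate=0.005,
                                   epsilon=1e-4).minimize(cost)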



Thanks for the edit and suggestion. I didn't post the code at first because I wanted people to focus on the debugging question and share any similar experience, rather than on my code; I'm pretty sure something is wrong with my code, but that should not affect PyCharm's debugging behavior.





Can you reproduce the problem in a program small enough to fit in your question? I find that trying to come up with a minimal reproducible example helps solve the problem, as well.
– Scott
Aug 10 at 16:58





Not really related, but important. The usual issue when copying some "expert's" code: NEVER use tf.Variable. At some point you will shoot yourself in the foot.
– Patwie
Aug 10 at 20:39
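For context on the comment above: the usual recommendation is to prefer tf.get_variable, which cooperates with variable scopes and reuse, over raw tf.Variable. A minimal sketch of one weight/bias pair from the question rewritten that way, assuming TensorFlow 1.4+ (the scope name is illustrative):

import tensorflow as tf

n_node_hl1 = 500  # matches the layer size in the question

# tf.get_variable respects tf.variable_scope reuse, so building the
# model twice shares the same weights instead of silently creating
# duplicates (the foot-gun that raw tf.Variable invites).
with tf.variable_scope('hidden_1', reuse=tf.AUTO_REUSE):
    weights = tf.get_variable('weights', shape=[784, n_node_hl1],
                              dtype=tf.float16,
                              initializer=tf.random_normal_initializer())
    bias = tf.get_variable('bias', shape=[n_node_hl1],
                           dtype=tf.float16,
                           initializer=tf.random_normal_initializer())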





@Patwie, I'm sorry, but why never use tf.Variable? Aren't the tf.Variable values meant to be tweaked by TensorFlow so that the cost function is minimized?
– Ramon Zhi
Aug 16 at 16:44









