Programming

FailedPreconditionError: Attempting to use uninitialized value in TensorFlow

lottogame 2020. 12. 7. 07:39



저는 "이상한"형식을 사용하여 데이터를 업로드하는 TensorFlow 튜토리얼 을 진행하고 있습니다. scikit-learn 결과와 비교할 수 있도록 데이터에 NumPy 또는 pandas 형식을 사용하고 싶습니다.

I get the digit recognition data from Kaggle: https://www.kaggle.com/c/digit-recognizer/data.

Here is the code from the TensorFlow tutorial (which works fine):

# Stuff from tensorflow tutorial 
import tensorflow as tf

sess = tf.InteractiveSession()

x = tf.placeholder("float", shape=[None, 784])
y_ = tf.placeholder("float", shape=[None, 10])

W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

y = tf.nn.softmax(tf.matmul(x, W) + b)

cross_entropy = -tf.reduce_sum(y_ * tf.log(y))

train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

Here I read in the data, strip off the target variable, and split the data into test and training datasets (this all works fine):

# Read dataframe from training data
import numpy as np
import pandas as pd
from pandas import DataFrame, read_csv

csvfile = 'train.csv'
df = read_csv(csvfile)

# Strip off the target data and make it a separate dataframe.
Target = df.label
del df["label"]

# Split data into training and testing sets
msk = np.random.rand(len(df)) < 0.8
dfTest = df[~msk]
TargetTest = Target[~msk]
df = df[msk]
Target = Target[msk]

# One hot encode the target
OHTarget=pd.get_dummies(Target)
OHTargetTest=pd.get_dummies(TargetTest)

Now when I try to run the training step, I get the following FailedPreconditionError:

for i in range(100):
    batch = np.array(df[i*50:i*50+50].values)
    batch = np.multiply(batch, 1.0 / 255.0)
    Target_batch = np.array(OHTarget[i*50:i*50+50].values)
    Target_batch = np.multiply(Target_batch, 1.0 / 255.0)
    train_step.run(feed_dict={x: batch, y_: Target_batch})

The full error is:

---------------------------------------------------------------------------
FailedPreconditionError                   Traceback (most recent call last)
<ipython-input-82-967faab7d494> in <module>()
      4     Target_batch = np.array(OHTarget[i*50:i*50+50].values)
      5     Target_batch = np.multiply(Target_batch, 1.0 / 255.0)
----> 6     train_step.run(feed_dict={x: batch, y_: Target_batch})

/Users/user32/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/ops.pyc in run(self, feed_dict, session)
   1265         none, the default session will be used.
   1266     """
-> 1267     _run_using_default_session(self, feed_dict, self.graph, session)
   1268
   1269

/Users/user32/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/ops.pyc in _run_using_default_session(operation, feed_dict, graph, session)
   2761                        "the operation's graph is different from the session's "
   2762                        "graph.")
-> 2763   session.run(operation, feed_dict)
   2764
   2765

/Users/user32/anaconda/lib/python2.7/site-packages/tensorflow/python/client/session.pyc in run(self, fetches, feed_dict)
    343
    344     # Run request and get response.
--> 345     results = self._do_run(target_list, unique_fetch_targets, feed_dict_string)
    346
    347     # User may have fetched the same tensor multiple times, but we

/Users/user32/anaconda/lib/python2.7/site-packages/tensorflow/python/client/session.pyc in _do_run(self, target_list, fetch_list, feed_dict)
    417         # pylint: disable=protected-access
    418         raise errors._make_specific_exception(node_def, op, e.error_message,
--> 419                                               e.code)
    420         # pylint: enable=protected-access
    421       raise e_type, e_value, e_traceback

FailedPreconditionError: Attempting to use uninitialized value Variable_1
     [[Node: gradients/add_grad/Shape_1 = Shape[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](Variable_1)]]
Caused by op u'gradients/add_grad/Shape_1', defined at:
  File "/Users/user32/anaconda/lib/python2.7/runpy.py", line 162, in _run_module_as_main
    ...........

...which was originally created as op u'add', defined at:
  File "/Users/user32/anaconda/lib/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
[elided 17 identical lines from previous traceback]
  File "/Users/user32/anaconda/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 3066, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-45-59183d86e462>", line 1, in <module>
    y = tf.nn.softmax(tf.matmul(x,W) + b)
  File "/Users/user32/anaconda/lib/python2.7/site-packages/tensorflow/python/ops/math_ops.py", line 403, in binary_op_wrapper
    return func(x, y, name=name)
  File "/Users/user32/anaconda/lib/python2.7/site-packages/tensorflow/python/ops/gen_math_ops.py", line 44, in add
    return _op_def_lib.apply_op("Add", x=x, y=y, name=name)
  File "/Users/user32/anaconda/lib/python2.7/site-packages/tensorflow/python/ops/op_def_library.py", line 633, in apply_op
    op_def=op_def)
  File "/Users/user32/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1710, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/Users/user32/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 988, in __init__
    self._traceback = _extract_stack()

Any ideas on how I can fix this?


The FailedPreconditionError arises because the program is attempting to read a variable (named "Variable_1") before it has been initialized. In TensorFlow, all variables must be explicitly initialized, by running their "initializer" operations. For convenience, you can run all of the variable initializers in the current session by executing the following statement before your training loop:

tf.initialize_all_variables().run()

Note that this answer assumes that, as in the question, you are using tf.InteractiveSession, which allows you to run operations without specifying a session. For non-interactive uses, it is more common to use tf.Session, and initialize as follows:

init_op = tf.initialize_all_variables()

sess = tf.Session()
sess.run(init_op)
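
Applied to the question's setup, the fix is simply to run the initializer after the graph is built and before the training loop. A minimal sketch reusing the names from the question (it assumes df and OHTarget have already been built as shown above, and uses tf.initialize_all_variables(), which, as noted below, is deprecated in later versions):

# Sketch: same graph as in the question, with the initializer run
# before the first train_step.run() call.
import numpy as np
import tensorflow as tf

sess = tf.InteractiveSession()

x = tf.placeholder("float", shape=[None, 784])
y_ = tf.placeholder("float", shape=[None, 10])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

tf.initialize_all_variables().run()   # <-- this is the line that fixes the FailedPreconditionError

for i in range(100):
    batch = np.multiply(np.array(df[i*50:i*50+50].values), 1.0 / 255.0)
    Target_batch = np.array(OHTarget[i*50:i*50+50].values)   # one-hot targets do not need the 1/255 scaling
    train_step.run(feed_dict={x: batch, y_: Target_batch})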

tf.initialize_all_variables() is deprecated. Instead, initialize TensorFlow variables with:

tf.global_variables_initializer()

A common example usage is:

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

From the official documentation on FailedPreconditionError:

This exception is most commonly raised when running an operation that reads a tf.Variable before it has been initialized.

In your case, the error even tells you which variable was not initialized: Attempting to use uninitialized value Variable_1. One of the TF tutorials explains a lot about variables: their creation, initialization, saving, and loading.
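
Restoring a variable from a checkpoint also counts as initializing it, which is the other common way to avoid this error. A rough sketch of the lifecycle that tutorial covers (the checkpoint path below is just an example):

# creation / initialization / saving / loading of a variable (TF 1.x)
import tensorflow as tf

v = tf.Variable(tf.zeros([3]), name="v")          # creation
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())   # initialization
    saver.save(sess, "/tmp/model.ckpt")           # saving

with tf.Session() as sess:
    saver.restore(sess, "/tmp/model.ckpt")        # loading assigns values,
    print(sess.run(v))                            # so no initializer is needed here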

Basically, to initialize a variable you have 3 options:

1) initialize all global variables at once with tf.global_variables_initializer()
2) initialize only the variables you care about with tf.variables_initializer([...])
3) initialize a single variable with its own initializer (for example W.initializer)

I almost always use the first approach. Remember that you should run it inside a session. So you will get something like this:

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

If you are curious about more information about variables, read this documentation to learn how to report_uninitialized_variables and check is_variable_initialized.
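
A small sketch of what those checks look like in practice (TF 1.x):

# Inspect initialization state before running any ops that read variables.
import tensorflow as tf

v = tf.Variable(tf.zeros([10]), name="v")

with tf.Session() as sess:
    print(sess.run(tf.report_uninitialized_variables()))   # lists names of uninitialized variables
    print(sess.run(tf.is_variable_initialized(v)))          # False
    sess.run(tf.global_variables_initializer())
    print(sess.run(tf.is_variable_initialized(v)))          # True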


Different use case, but setting your session as the default session did the trick for me:

with sess.as_default():
    result = compute_fn([seed_input,1])

This is one of those mistakes that is so obvious once you have solved it.

My use case is the following:
1) store a Keras VGG16 as a TensorFlow graph
2) load the Keras VGG16 as a graph
3) run a tf function on the graph and get:

FailedPreconditionError: Attempting to use uninitialized value block1_conv2/bias
     [[Node: block1_conv2/bias/read = Identity[T=DT_FLOAT, _class=["loc:@block1_conv2/bias"], _device="/job:localhost/replica:0/task:0/device:GPU:0"](block1_conv2/bias)]]
     [[Node: predictions/Softmax/_7 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_168_predictions/Softmax", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
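
The pattern itself, stripped of the VGG16 specifics, just makes the session that actually holds the initialized values the default session. A minimal sketch with a toy graph (not the VGG16 graph from above):

# Minimal sketch of the as_default() fix with a plain TF 1.x graph.
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 3])
w = tf.Variable(tf.ones([3, 1]))
y = tf.matmul(x, w)

sess = tf.Session()
sess.run(tf.global_variables_initializer())   # the variables live in *this* session

with sess.as_default():                       # make it the default session
    print(y.eval(feed_dict={x: [[1.0, 2.0, 3.0]]}))   # eval() now uses sess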

I got this error message in a completely different case. It seemed that the exception handler in TensorFlow raised it. You can check each row in the traceback. In my case, it happened in tensorflow/python/lib/io/file_io.py, because this file contained a different bug, where self.__mode and self.__name weren't initialized and it needed to call self._FileIO__mode and self._FileIO__name instead.


You have to initialize variables before using them.

If you try to evaluate the variables before initializing them you'll run into: FailedPreconditionError: Attempting to use uninitialized value tensor.
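
A minimal reproduction of that situation (TF 1.x):

# Running an op that reads a variable before its initializer raises the error.
import tensorflow as tf

w = tf.Variable(tf.zeros([2, 2]))
with tf.Session() as sess:
    sess.run(w)   # FailedPreconditionError: Attempting to use uninitialized value Variable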

The easiest way is to initialize all variables at once using tf.global_variables_initializer():

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)

You use sess.run(init) to run the initializer, without fetching any value.

To initialize only a subset of variables, you use tf.variables_initializer() listing the variables:

a = tf.Variable(tf.zeros([2]), name="a")   # example variables to initialize
b = tf.Variable(tf.zeros([3]), name="b")
var_ab = tf.variables_initializer([a, b], name="a_and_b")
with tf.Session() as sess:
    sess.run(var_ab)

You can also initialize each variable separately using tf.Variable.initializer:

# create variable W as 784 x 10 tensor, filled with zeros
W = tf.Variable(tf.zeros([784, 10]))
with tf.Session() as sess:
    sess.run(W.initializer)

When I had this issue with tf.train.string_input_producer() and tf.train.batch(), initializing the local variables before I started the Coordinator solved the problem. I had been getting the error when I initialized the local variables after starting the Coordinator.
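
A sketch of that ordering, assuming the usual TF 1.x queue-based input pipeline (the file name and reader setup here are just illustrative):

# Run the initializers *before* starting the Coordinator / queue runners.
import tensorflow as tf

filename_queue = tf.train.string_input_producer(["train.csv"], num_epochs=1)
reader = tf.TextLineReader(skip_header_lines=1)
_, row = reader.read(filename_queue)
rows = tf.train.batch([row], batch_size=32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(tf.local_variables_initializer())   # string_input_producer's epoch counter is a local variable
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    # ... consume batches with sess.run(rows) here ...
    coord.request_stop()
    coord.join(threads)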


The FailedPreconditionError occurs because the session is trying to read a variable that hasn't been initialized.

As of TensorFlow version 1.11.0, you need to do this:

init_op = tf.global_variables_initializer()

sess = tf.Session()
sess.run(init_op)

Possibly something has changed in recent TensorFlow builds, because for me, running

sess = tf.Session()
sess.run(tf.local_variables_initializer())

before fitting any models seems to do the trick. Most older examples and comments seem to suggest tf.global_variables_initializer().
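
If you are not sure which kind of variables your graph uses, running both initializers together is a cheap, safe default:

# Cover both global and local variables in one run call.
import tensorflow as tf

sess = tf.Session()
sess.run([tf.global_variables_initializer(),
          tf.local_variables_initializer()])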

Reference URL: https://stackoverflow.com/questions/34001922/failedpreconditionerror-attempting-to-use-uninitialized-in-tensorflow
