
Disabling TensorFlow Debugging Information

lottogame 2020. 7. 8. 08:15



By debugging information I mean what TensorFlow prints to the terminal about loaded libraries, detected devices, and so on, not Python errors.

I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcurand.so locally
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:900] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 0 with properties: 
name: Graphics Device
major: 5 minor: 2 memoryClockRate (GHz) 1.0885
pciBusID 0000:04:00.0
Total memory: 12.00GiB
Free memory: 11.83GiB
I tensorflow/core/common_runtime/gpu/gpu_init.cc:126] DMA: 0 
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 0:   Y 
I tensorflow/core/common_runtime/gpu/gpu_device.cc:717] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Graphics Device, pci bus id: 0000:04:00.0)
I tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:51] Creating bin of max chunk size 1.0KiB
...

You can disable all debugging logs using os.environ:

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' 
import tensorflow as tf

Tested on tf 0.12 and 1.0.

In more detail:

0 = all messages are logged (default behavior)
1 = INFO messages are not printed
2 = INFO and WARNING messages are not printed
3 = INFO, WARNING, and ERROR messages are not printed

Update for v0.12+ (5/20/17), works through TF 2.0 alpha:

In TensorFlow 0.12+, per this issue, you can control logging via the environment variable TF_CPP_MIN_LOG_LEVEL. It defaults to 0 (all logs shown), but can be set to one of the following values under the Level column:

  Level | Level for Humans | Level Description                  
 -------|------------------|------------------------------------ 
  0     | DEBUG            | [Default] Print all messages       
  1     | INFO             | Filter out INFO messages           
  2     | WARNING          | Filter out INFO & WARNING messages 
  3     | ERROR            | Filter out all messages      

See the following generic OS example using Python:

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'  # or any {'0', '1', '2'}
import tensorflow as tf
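One detail worth spelling out: the native runtime reads this variable when TensorFlow is first imported, so setting it after the import has no effect. A minimal sketch of the ordering (TensorFlow itself is not needed to see the variable):

```python
import os

# Must be set BEFORE `import tensorflow`: the native library reads the
# variable once, when it is first loaded.
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'  # hide INFO and WARNING

# import tensorflow as tf  # imported here, the startup banner is suppressed
print(os.environ['TF_CPP_MIN_LOG_LEVEL'])
```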

To be thorough, also set the level of the Python tf_logging module, which is used by summary ops, TensorBoard, the various estimators, and so on:

# append to lines above
tf.logging.set_verbosity(tf.logging.ERROR)  # or any {DEBUG, INFO, WARN, ERROR, FATAL}


For prior versions of TensorFlow or TF-Learn logging (v0.11.x or lower):

See the page below for information on TensorFlow logging; with the newer update, you can set the logging verbosity to one of DEBUG, INFO, WARN, ERROR, or FATAL. For example:

tf.logging.set_verbosity(tf.logging.ERROR)

That page additionally covers monitors that can be used with TF-Learn models.

This doesn't block all logging, though (only TF-Learn's). I have two solutions: one is a "technically correct" solution (Linux only), and the other involves rebuilding TensorFlow.

script -c 'python [FILENAME].py' | grep -v 'I tensorflow/'

For the other, which involves modifying the source and rebuilding TensorFlow, see this answer.


I have had this problem as well (on tensorflow-0.10.0rc0), but could not fix the excessive nose tests logging problem via the suggested answers.

I managed to solve this by probing directly into the tensorflow logger. Not the most correct of fixes, but works great and only pollutes the test files which directly or indirectly import tensorflow:

# Place this before directly or indirectly importing tensorflow
import logging
logging.getLogger("tensorflow").setLevel(logging.WARNING)

For compatibility with TensorFlow 2.0, you can use tf.get_logger:

import logging
import tensorflow as tf
tf.get_logger().setLevel(logging.ERROR)

As TF_CPP_MIN_LOG_LEVEL didn't work for me, you can try:

tf.logging.set_verbosity(tf.logging.WARN)

Worked for me in tensorflow v1.6.0


The usual Python 3 logging machinery works for me with tensorflow==1.11.0:

import logging
logging.getLogger('tensorflow').setLevel(logging.INFO)

To add some flexibility here, you can achieve more fine-grained control over the level of logging by writing a function that filters out messages however you like:

logging.getLogger('tensorflow').addFilter(my_filter_func)

where my_filter_func accepts a LogRecord object as input [LogRecord docs] and returns zero if you want the message thrown out; nonzero otherwise.

Here's an example filter that only keeps every nth info message (Python 3 due to the use of nonlocal here):

def keep_every_nth_info(n):
    i = -1
    def filter_record(record):
        nonlocal i
        i += 1
        return int(record.levelname != 'INFO' or i % n == 0)
    return filter_record

# Example usage for TensorFlow:
logging.getLogger('tensorflow').addFilter(keep_every_nth_info(5))
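Since addFilter and LogRecord are plain stdlib logging features, the filter can be exercised in isolation, without TensorFlow. In the sketch below, the logger name `demo` and the `Collector` handler are just for illustration:

```python
import logging

def keep_every_nth_info(n):
    # Keep every n-th INFO record; pass records of all other levels through.
    i = -1
    def filter_record(record):
        nonlocal i
        i += 1
        return int(record.levelname != 'INFO' or i % n == 0)
    return filter_record

class Collector(logging.Handler):
    # Minimal handler that stores formatted messages for inspection.
    def __init__(self):
        super().__init__()
        self.messages = []
    def emit(self, record):
        self.messages.append(record.getMessage())

logger = logging.getLogger('demo')
logger.setLevel(logging.INFO)
logger.propagate = False        # keep output off the root logger
handler = Collector()
logger.addHandler(handler)
logger.addFilter(keep_every_nth_info(3))

for k in range(7):
    logger.info('info %d', k)

print(handler.messages)  # only every 3rd INFO message survives
```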

All of the above has assumed that TensorFlow has set up its logging state already. You can ensure this without side effects by calling tf.logging.get_verbosity() before adding a filter.


Yeah, I'm using tf 2.0-beta and want to enable/disable the default logging. The environment variable and methods in tf1.X don't seem to exist anymore.

I stepped around in PDB and found this to work:

# close the TF2 logger
tf2logger = tf.get_logger()
tf2logger.error('Close TF2 logger handlers')
tf2logger.root.removeHandler(tf2logger.root.handlers[0])

I then add my own logger API (in this case file-based):

logtf = logging.getLogger('DST')
logtf.setLevel(logging.DEBUG)

# file handler
logfile='/tmp/tf_s.log'
fh = logging.FileHandler(logfile)
fh.setFormatter( logging.Formatter('fh %(asctime)s %(name)s %(filename)s:%(lineno)d :%(message)s') )
logtf.addHandler(fh)
logtf.info('writing to %s', logfile)

I solved it with the post Cannot remove all warnings #27045, and the solution was:

import logging
logging.getLogger('tensorflow').disabled = True
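The `disabled` attribute is standard stdlib `logging` behavior, so its effect can be demonstrated on any logger; the name `disable_demo` below is arbitrary:

```python
import io
import logging

buf = io.StringIO()
logger = logging.getLogger('disable_demo')
logger.propagate = False
logger.addHandler(logging.StreamHandler(buf))

logger.warning('visible')
logger.disabled = True          # silences the logger entirely
logger.warning('suppressed')    # never reaches the handler

print(buf.getvalue())
```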

Reference URL: https://stackoverflow.com/questions/35911252/disable-tensorflow-debugging-information
