
Memory error when using pandas read_csv

lottogame 2020. 10. 13. 07:17

I am trying something fairly simple: reading a large csv file into a pandas DataFrame.

data = pandas.read_csv(filepath, header = 0, sep = DELIMITER,skiprows = 2)

The code either fails with a MemoryError or simply never finishes.

Memory usage in Task Manager stopped at 506 MB, and after 5 minutes of no change and no CPU activity from the process, I killed it.

I am using pandas version 0.11.0.

I am aware that there used to be a memory problem with the file parser, but according to http://wesmckinney.com/blog/?p=543 this should have been fixed.

The file I am trying to read is 366 MB; the code above works if I cut the file down to something short (25 MB).

I also get a popup saying it can't write to address 0x1e0baf93.

Stacktrace:

Traceback (most recent call last):
  File "F:\QA ALM\Python\new WIM data\new WIM data\new_WIM_data.py", line 25, in <module>
    wimdata = pandas.read_csv(filepath, header = 0, sep = DELIMITER,skiprows = 2)
  File "C:\Program Files\Python\Anaconda\lib\site-packages\pandas\io\parsers.py", line 401, in parser_f
    return _read(filepath_or_buffer, kwds)
  File "C:\Program Files\Python\Anaconda\lib\site-packages\pandas\io\parsers.py", line 216, in _read
    return parser.read()
  File "C:\Program Files\Python\Anaconda\lib\site-packages\pandas\io\parsers.py", line 643, in read
    df = DataFrame(col_dict, columns=columns, index=index)
  File "C:\Program Files\Python\Anaconda\lib\site-packages\pandas\core\frame.py", line 394, in __init__
    mgr = self._init_dict(data, index, columns, dtype=dtype)
  File "C:\Program Files\Python\Anaconda\lib\site-packages\pandas\core\frame.py", line 525, in _init_dict
    dtype=dtype)
  File "C:\Program Files\Python\Anaconda\lib\site-packages\pandas\core\frame.py", line 5338, in _arrays_to_mgr
    return create_block_manager_from_arrays(arrays, arr_names, axes)
  File "C:\Program Files\Python\Anaconda\lib\site-packages\pandas\core\internals.py", line 1820, in create_block_manager_from_arrays
    blocks = form_blocks(arrays, names, axes)
  File "C:\Program Files\Python\Anaconda\lib\site-packages\pandas\core\internals.py", line 1872, in form_blocks
    float_blocks = _multi_blockify(float_items, items)
  File "C:\Program Files\Python\Anaconda\lib\site-packages\pandas\core\internals.py", line 1930, in _multi_blockify
    block_items, values = _stack_arrays(list(tup_block), ref_items, dtype)
  File "C:\Program Files\Python\Anaconda\lib\site-packages\pandas\core\internals.py", line 1962, in _stack_arrays
    stacked = np.empty(shape, dtype=dtype)
MemoryError
Press any key to continue . . .

Some background: I am trying to convince people that Python can do the same things as R. To that end, I am trying to replicate an R script that does

data <- read.table(paste(INPUTDIR,config[i,]$TOEXTRACT,sep=""), HASHEADER, DELIMITER,skip=2,fill=TRUE)

Not only does R read the file above just fine, it even reads several of these files in a for loop and then does some work with the data. If Python has a problem with files of that size, I might be fighting a losing battle...


Windows memory limits

Memory errors happen a lot with Python when using the 32-bit version on Windows, because 32-bit processes only get 2 GB of memory to play with by default.
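
If you are not sure which build you are running, here is a minimal check (nothing pandas-specific, just standard library calls):

import platform
import struct

# A 32-bit interpreter has 4-byte pointers, a 64-bit one 8-byte pointers
print(struct.calcsize("P") * 8)    # 32 or 64
print(platform.architecture()[0])  # '32bit' or '64bit'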

How to reduce memory usage

If you are not using 32-bit Python on Windows but want to improve memory efficiency while reading csv files, there is a trick.

The pandas.read_csv function takes an option called dtype. This lets pandas know which types exist inside your csv data.

How it works

By default, pandas tries to guess which dtypes your csv file has. This is a very heavy operation, because while determining the dtype it has to keep all the raw data as objects (strings) in memory.

Let's say your csv looks like this:

name, age, birthday
Alice, 30, 1985-01-01
Bob, 35, 1980-01-01
Charlie, 25, 1990-01-01

This example is of course no problem to read into memory, but it is only an example.

If pandas were to read the above csv file without a dtype option, the values would be stored as strings in memory until pandas had read enough lines of the csv file to make a qualified guess.

The default in pandas is to read 1,000,000 rows before guessing the dtype.

Solution

Specifying dtype={'age': int} as an option to .read_csv() will let pandas know that age should be interpreted as a number. This saves you lots of memory.
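
For example, a minimal sketch using the columns from the sample csv above (the file name data.csv is just a placeholder):

import pandas as pd

# Declaring the types up front means pandas does not have to keep the raw
# strings around while guessing; columns not listed are still inferred.
df = pd.read_csv(
    'data.csv',
    dtype={'age': int, 'name': str},
    parse_dates=['birthday'],  # optional: parse the date column directly
    skipinitialspace=True,     # the sample csv has spaces after the commas
)
print(df.dtypes)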

Problem with corrupt data

However, if your csv file were corrupted, like this:

name, age, birthday
Alice, 30, 1985-01-01
Bob, 35, 1980-01-01
Charlie, 25, 1990-01-01
Dennis, 40+, None-Ur-Bz

Then specifying dtype={'age':int} will break the .read_csv() command, because it cannot cast "40+" to int. So sanitize your data carefully!
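
If you cannot guarantee clean data, one common workaround (not part of the original answer, just a hedged alternative) is to read the column as strings and convert it afterwards, turning unparseable values into NaN:

import pandas as pd

# Read 'age' as strings so read_csv cannot fail on values like '40+'
df = pd.read_csv('data.csv', dtype={'age': str}, skipinitialspace=True)

# errors='coerce' turns '40+' into NaN instead of raising; the column
# ends up as float64 because NaN cannot live in an int column
df['age'] = pd.to_numeric(df['age'], errors='coerce')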

Here you can see how the memory usage of a pandas dataframe is a lot higher when floats are kept as strings:

Try it yourself

import resource  # Unix-only; ru_maxrss is reported in kilobytes on Linux

import numpy as np
import pandas as pd

# Floats kept as strings (object dtype) use far more memory...
df = pd.DataFrame(np.random.choice(['1.0', '0.6666667', '150000.1'], (100000, 10)))
resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
# 224544 (~224 MB)

# ...than the same values stored as float64
df = pd.DataFrame(np.random.choice([1.0, 0.6666667, 150000.1], (100000, 10)))
resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
# 79560 (~79 MB)

I had the same memory problem with a simple read of a tab delimited text file around 1 GB in size (over 5.5 million records) and this solved the memory problem:

df = pd.read_csv(myfile,sep='\t') # didn't work, memory error
df = pd.read_csv(myfile,sep='\t',low_memory=False) # worked fine and in less than 30 seconds

Spyder 3.2.3 Python 2.7.13 64bits


I use Pandas on my Linux box and faced many memory leaks that were only resolved after upgrading Pandas to the latest version, cloned from GitHub.


There is no error for Pandas 0.12.0 and NumPy 1.8.0.

I have managed to create a big DataFrame, save it to a csv file and then successfully read it back. Please see the example here. The size of the file is 554 MB (it even worked for a 1.1 GB file; that took longer, and to generate the 1.1 GB file use a frequency of 30 seconds). I do have 4 GB of RAM available, though.

My suggestion is to try updating Pandas. Another thing that could be useful is to run your script from the command line, because for R you are not using Visual Studio (this was already suggested in the comments to your question), so it has more resources available.


I encountered this issue as well when I was running in a virtual machine, or somewhere else where memory is strictly limited. It has nothing to do with pandas or numpy or csv; it will always happen if you try to use more memory than you are allowed to, and not only in Python.

The only chance you have is what you already tried: try to chop the big thing down into smaller pieces which fit into memory.

If you ever asked yourself what MapReduce is all about, you found out by yourself... MapReduce would try to distribute the chunks over many machines, while you try to process the chunks on one machine one after another.

What you found out with the concatenation of the chunk files might indeed be an issue; maybe some copies are needed in that operation... In the end this may save you in your current situation, but if your csv gets a little bit larger you might run into that wall again...

It could also be that pandas is smart enough to only load the individual data chunks into memory when you actually do something with them, like concatenating them into one big df?

Several things you can try:

  • Don't load all the data at once, but split it into pieces
  • As far as I know, hdf5 is able to do this chunking automatically and only loads the part your program is currently working on (see the sketch after this list)
  • Check whether the types are okay; a string '0.111111' needs more memory than a float
  • Think about what you actually need: if the address is stored as a string, you might not need it for numerical analysis...
  • A database can help with accessing and loading only the parts you actually need (e.g. only the 1% active users)
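
A minimal sketch of the chunks-plus-hdf5 idea (assuming the PyTables package is installed; the file names, chunk size and the 'age' column are placeholders):

import pandas as pd

# Stream the csv in chunks and append each chunk to an HDF5 store, so only
# one chunk has to fit in memory at a time.
store = pd.HDFStore('data.h5')
for chunk in pd.read_csv('big_file.csv', sep='\t', chunksize=100000):
    store.append('data', chunk, data_columns=True)
store.close()

# Later, load only the rows you actually need instead of the whole file
subset = pd.read_hdf('data.h5', 'data', where='age > 30')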

I tried chunksize while reading a big CSV file:

reader = pd.read_csv(filePath,chunksize=1000000,low_memory=False,header=0)

read_csv now returns an iterator of chunks. We can iterate over the reader and write/append each chunk to a new csv, or perform any other operation on it:

for chunk in reader:
    print(chunk.columns)
    print("Chunk -> File process")
    with open(destination, 'a') as f:
        chunk.to_csv(f, header=False, sep='\t', index=False)
        print("Chunk appended to the file")

Although this is more of a workaround than a fix, I'd try converting that CSV to JSON (should be trivial) and using the read_json method instead - I've been writing and reading sizable JSON files/dataframes (hundreds of MB) in Pandas this way without any problem at all.
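
A minimal sketch of that round trip (the file names, separator and chunk size are placeholders; lines=True writes one JSON record per line):

import pandas as pd

# One-off conversion: read the csv in chunks and write each chunk as JSON lines
reader = pd.read_csv('big_file.csv', sep='\t', chunksize=100000)
for i, chunk in enumerate(reader):
    chunk.to_json('part_%d.json' % i, orient='records', lines=True)

# Afterwards, read a part back with read_json
df = pd.read_json('part_0.json', orient='records', lines=True)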

Reference URL: https://stackoverflow.com/questions/17557074/memory-error-when-using-pandas-read-csv
