
Pandas Memory Error

I have a CSV file with ~50,000 rows and 300 columns. Performing the following operation causes a memory error in pandas (Python): merged_df.stack(0).reset_index(1)

Solution 1:

On my 64-bit Linux machine (32 GB of RAM), this takes a little under 2 GB of memory:

In [4]: import numpy as np
   ...: from pandas import DataFrame

In [5]: def f():
   ...:     df = DataFrame(np.random.randn(50000, 300))
   ...:     df.stack().reset_index(1)


In [6]: %memit f()
maximum of 1: 1791.054688 MB per loop

Since you didn't specify your platform: this won't work at all on 32-bit Python (you usually can't allocate a contiguous 2 GB block), but it should work on 64-bit if you have reasonable swap/memory.
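A back-of-the-envelope estimate helps explain that ~1.8 GB peak (this is an assumption-based sketch, not a measurement): the raw float64 values are only ~114 MB, but `stack()` builds a new Series plus a two-level MultiIndex, and `reset_index(1)` materializes one index level as a column, so several same-sized intermediate arrays coexist during the operation.

```python
# Estimate the raw data size of a 50,000 x 300 float64 DataFrame.
rows, cols, bytes_per_float64 = 50_000, 300, 8

base_mb = rows * cols * bytes_per_float64 / 2**20
print(f"raw values: {base_mb:.0f} MB")  # → raw values: 114 MB

# stack() yields rows*cols values again, and the MultiIndex levels and
# the reset column each add further same-order-of-magnitude arrays, so
# peak usage during the operation is a large multiple of the base size.
stacked_mb = rows * cols * bytes_per_float64 / 2**20
print(f"stacked values alone: {stacked_mb:.0f} MB")
```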

Solution 2:

As an alternative approach, you can use the dask library, e.g.:

# Dataframes implement the Pandas API
import dask.dataframe as dd
df = dd.read_csv('s3://.../2018-*-*.csv')
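If you prefer to stay within pandas, a similar idea works with `read_csv(chunksize=...)`: process the file a slice at a time so the full 50,000 x 300 table is never resident at once. The sketch below uses a small in-memory CSV as a stand-in for the real file (the data and chunk size are illustrative assumptions):

```python
import io

import numpy as np
import pandas as pd

# Small in-memory CSV stand-in for the real file (4 rows x 3 columns).
csv_buffer = io.StringIO(
    pd.DataFrame(np.arange(12).reshape(4, 3), columns=list("abc")).to_csv(index=False)
)

pieces = []
# chunksize makes read_csv yield DataFrames of at most that many rows,
# so each chunk can be stacked independently with a small footprint.
for chunk in pd.read_csv(csv_buffer, chunksize=2):
    pieces.append(chunk.stack().reset_index(1))

result = pd.concat(pieces, ignore_index=True)
print(len(result))  # 4 rows x 3 columns -> 12 stacked values
```

This only helps if your downstream computation can be expressed chunk-by-chunk; dask automates exactly that kind of partitioned execution behind the pandas API.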
