
Too Many Open Files Error With Popen Of Subprocess

I'm using Python's subprocess module to call a command that writes values from a file to memory. It looks like:

import subprocess

f = open('memdump', 'r')
content = [line.split()[1] for line in f]
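The rest of the snippet is cut off, but going by the answers below the script apparently spawns one Popen per write inside a loop. A hypothetical reconstruction of that pattern is shown here purely for illustration; the pdt command, the jaguar command string, and the 0x4400/0x4800 offsets are taken from the solutions, not from the original post:

import subprocess
from itertools import count

JAGUAR = "jaguar instance=0; jaguar wr offset=0x%x value=%s"

f = open('memdump', 'r')
content = [line.split()[1] for line in f]

# One Popen per write, never waited on: every call leaves its stdout and
# stderr pipes open, so after a few hundred iterations the Python process
# runs out of file descriptors.
for tbl_pt0, tbl_pt1, value in zip(count(0x4400, 4), count(0x4800, 4), content):
    subprocess.Popen("echo '%s' | pdt" % (JAGUAR % (tbl_pt0, value)), shell=True,
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    subprocess.Popen("echo '%s' | pdt" % (JAGUAR % (tbl_pt1, value)), shell=True,
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)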

Solution 1:

The reason is that the subprocess.Popen call is asynchronous: it returns immediately, without waiting for the spawned process to exit. As a result you quickly create 2x256 processes, each with 2 pipes, and each pipe takes up a file descriptor. A single process can only have a limited number of file descriptors open at any one time, and you hit that limit because you never wait for the processes to finish and their pipes to close. You can either wait for them to exit, e.g. with p.communicate() where p is the return value of subprocess.Popen (see the sketch after the list below), or increase the maximum number of open file descriptors:

  • permanently - add fs.file-max = 100000 to /etc/sysctl.conf
  • temporarily (until reboot) - sysctl -w fs.file-max=100000
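A minimal sketch of the first option, waiting on each process before starting the next one. The pdt command and the jaguar command string are borrowed from Solution 2 below and stand in for whatever command you are running; the specific offset and value are placeholders, and universal_newlines=True is only there so communicate() accepts a plain string:

import subprocess

# Start the child, send it one command on stdin, and block until it exits.
# communicate() also closes the child's pipes, so every file descriptor is
# released before the next Popen call is made.
p = subprocess.Popen(["pdt"], stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE, universal_newlines=True)
out, err = p.communicate("jaguar instance=0; jaguar wr offset=0x4400 value=0x0")
print(p.returncode)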

Solution 2:

With every Popen you create 6 new file handles. Perhaps you can run the processes one by one and use communicate() to feed the command on stdin instead of piping it in with echo:

import subprocess
from itertools import count

JAGUAR = "jaguar instance=0; jaguar wr offset=0x%x value=%s"

with open('memdump', 'r') as f:
    content = [line.split()[1] for line in f]

for tbl_pt0, tbl_pt1, value in zip(count(0x4400, 4), count(0x4800, 4), content):
    # stdin must be a pipe so communicate() can deliver the command string;
    # communicate() then waits for pdt to exit and closes all of its pipes.
    subprocess.Popen(["pdt"], stdin=subprocess.PIPE, stdout=subprocess.PIPE,
        stderr=subprocess.PIPE, close_fds=True,
        universal_newlines=True).communicate(JAGUAR % (tbl_pt0, value))
    subprocess.Popen(["pdt"], stdin=subprocess.PIPE, stdout=subprocess.PIPE,
        stderr=subprocess.PIPE, close_fds=True,
        universal_newlines=True).communicate(JAGUAR % (tbl_pt1, value))
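Because communicate() writes the command to pdt's stdin, waits for the process to exit, and closes all of its pipes before the next iteration starts, only one set of file descriptors is ever open at a time, which keeps the loop well under the per-process limit.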
