It seems like the engines use a lot of "static" memory, i.e. when we start the engines, they take up noticeable memory even before anything has been executed. In our testing, we started 16 engines across 2 nodes, and as soon as we did, the memory usage on each node went from ~150MB to ~1.2GB, i.e. 8 engines seem to consume ~1.05GB of static memory. This of course covers both the engine and nanny processes, but based on `htop`, almost all of it comes from the engine process. ~134MB of static memory per engine seems a bit high. Is there a way we could reduce this?
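For anyone wanting to reproduce the measurement outside `htop`, a minimal sketch along these lines sums the resident memory of idle engine processes. psutil is an extra dependency used only for the measurement, and the command-line marker is a guess; adjust it to however your engines are launched.

```python
# Sketch only: psutil is not part of ipyparallel, and ENGINE_MARKER is a
# hypothetical match on the engine command line; adjust for your launcher.
import psutil

ENGINE_MARKER = "ipengine"  # could also be e.g. "ipyparallel.engine"

total = 0
count = 0
for proc in psutil.process_iter(["pid", "cmdline", "memory_info"]):
    info = proc.info
    cmdline = " ".join(info["cmdline"] or [])
    if ENGINE_MARKER in cmdline and info["memory_info"] is not None:
        rss = info["memory_info"].rss
        total += rss
        count += 1
        print(f"engine pid={info['pid']}: {rss / 1e6:.1f} MB resident")

if count:
    print(f"{count} engines, {total / 1e6:.1f} MB total, {total / count / 1e6:.1f} MB each")
```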
I imagine this is mostly attributable to imports. I can run some import profiling to see if there are any that can be delayed to avoid unnecessary usage.
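As a starting point, that kind of import profiling might look something like the sketch below: run each candidate import in a fresh interpreter and compare resident memory before and after, so earlier imports don't hide the cost of later ones. psutil and the module list are my assumptions, not anything ipyparallel ships.

```python
# Sketch of per-module import profiling: fresh interpreter per module,
# RSS delta around the import. Candidate modules are guesses at likely
# heavy dependencies.
import subprocess
import sys

SNIPPET = """
import os, psutil
before = psutil.Process(os.getpid()).memory_info().rss
import {module}
after = psutil.Process(os.getpid()).memory_info().rss
print((after - before) / 1e6)
"""

for module in ["ipyparallel", "IPython", "zmq", "jupyter_client"]:
    result = subprocess.run(
        [sys.executable, "-c", SNIPPET.format(module=module)],
        capture_output=True, text=True, check=True,
    )
    print(f"import {module}: +{float(result.stdout):.1f} MB RSS")
```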
I suspect the biggest source is imports in the top-level ipyparallel.__init__ that may not be used, and we can try to optimize those out.
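One way such top-level imports could be deferred is a module-level `__getattr__` (PEP 562). The attribute names and submodule paths below are illustrative only, not ipyparallel's actual layout.

```python
# Illustrative only: a package __init__.py can defer heavy submodule imports
# until first use via PEP 562. Names and paths here are made up.
import importlib

_lazy_attrs = {
    # attribute name -> submodule it lives in (hypothetical)
    "Client": ".client",
    "Cluster": ".cluster",
}

def __getattr__(name):
    # Pay the import cost only when the attribute is actually accessed.
    if name in _lazy_attrs:
        module = importlib.import_module(_lazy_attrs[name], __name__)
        return getattr(module, name)
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```

With this pattern, `import mypkg` stays cheap and the heavy submodule is only loaded the first time `mypkg.Client` is touched.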
Instantiating IPython itself uses quite a bit, so I'm not sure how much that can be brought down without changes on IPython's side, perhaps by turning off some features.
It may be worth bringing up IPython's own memory usage over there.
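To get a rough sense of that cost, one could measure resident memory around importing and instantiating a bare `InteractiveShell`. The engine actually runs ipykernel's shell subclass, so this is only a lower-bound sketch, and psutil is again an extra dependency used just for the measurement.

```python
# Rough lower-bound sketch of IPython's startup cost: RSS before/after
# importing the shell machinery and instantiating a bare InteractiveShell.
import os
import psutil

proc = psutil.Process(os.getpid())

base = proc.memory_info().rss
from IPython.core.interactiveshell import InteractiveShell
after_import = proc.memory_info().rss

InteractiveShell.instance()
after_instance = proc.memory_info().rss

print(f"import shell machinery:       +{(after_import - base) / 1e6:.1f} MB")
print(f"instantiate InteractiveShell: +{(after_instance - after_import) / 1e6:.1f} MB")
```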