Docker uses cgroups to limit the amount of memory available to a container. This is not perfect, and one symptom of the problem can be seen with free:
$ free -h              # This is from a 1G instance, but reporting 2G
              total        used        free      shared  buff/cache   available
Mem:           1.9G        204M        1.5G         19M        231M        1.6G
Swap:          1.0G        607M        416M
This output shows the 'host' memory usage, not the pod's. The same applies to memory allocation in Python (and R), which consults /proc/meminfo instead of the cgroup limits.
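You can observe the discrepancy from Python by comparing the host-wide MemTotal in /proc/meminfo against the cgroup limit. A minimal sketch, assuming cgroup v1 as used throughout this article (the helper name is ours, not a standard API):

```python
def meminfo_total_kb(meminfo_text):
    """Parse the host-wide MemTotal value (in kB) out of /proc/meminfo content."""
    for line in meminfo_text.splitlines():
        if line.startswith("MemTotal:"):
            return int(line.split()[1])
    raise ValueError("MemTotal not found in /proc/meminfo")

if __name__ == "__main__":
    with open("/proc/meminfo") as f:
        # This reports the HOST's total memory, even inside a limited container.
        print("host MemTotal: %d kB" % meminfo_total_kb(f.read()))
    try:
        with open("/sys/fs/cgroup/memory/memory.limit_in_bytes") as f:
            # This is the limit actually enforced on the container.
            print("cgroup limit:  %d bytes" % int(f.read()))
    except OSError:
        print("cgroup v1 memory controller not mounted here")
```

Inside a memory-limited pod, the two numbers will generally disagree; tools that only consult /proc/meminfo will over-report what is really available.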
You can verify whether your pod crashed because it ran out of memory. This may require support from your Domino Admin(s) to check on your run. Log in to the Central Server/Rancher and run:
# kubectl get po -n <Namespace> | grep <runId>
run-<runId>-<uniqueID> 3/4 OOMKilled 0 23h
What can I do to lessen the impact?
You can avoid crashing your Workspace by setting soft and hard limits inside your Python application.
An example adapted from helpful posts on Server Fault and from Carlos Becker demonstrates how you can cap your memory usage at a fixed value, or at a percentage of the free memory perceived via cgroups.
With the soft and hard limits set, the Docker container should remain running even though your application itself will fail.
import resource

# Read the cgroup (v1) memory limit and apply it as both the soft and hard
# address-space limit for this process.
with open('/sys/fs/cgroup/memory/memory.limit_in_bytes') as limit:
    mem = int(limit.read())
resource.setrlimit(resource.RLIMIT_AS, (mem, mem))

# The following is just a test of the functionality. To use the soft/hard
# limit in your code, you only need the lines ABOVE.
print(resource.getrlimit(resource.RLIMIT_AS))
MAXMEM, HARDLIMIT = resource.getrlimit(resource.RLIMIT_AS)
with open('/sys/fs/cgroup/memory/memory.usage_in_bytes') as memused:
    memusedinbytes = int(memused.read())
# print(MAXMEM, memusedinbytes)
# x = bytearray(900*1024*1024)       # allocate 900 MB
# x = bytearray(int(MAXMEM*0.97))    # allocate 97% of the reported soft limit
# The following can easily FAIL because the 'fuzz' includes buffer space in use:
# x = bytearray(MAXMEM - memusedinbytes)
estimatedFree = MAXMEM - memusedinbytes
x = bytearray(int(estimatedFree * 0.95))
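Note that memory.limit_in_bytes and memory.usage_in_bytes are cgroup v1 paths; on hosts running cgroup v2 the equivalents are /sys/fs/cgroup/memory.max and /sys/fs/cgroup/memory.current (where the limit can also be the literal string "max"). A sketch of a helper (our own addition, not from the original post) that tries both layouts:

```python
def read_cgroup_memory():
    """Return (limit_bytes, used_bytes), trying cgroup v1 then v2 layouts.

    Returns None for a value that is absent or unlimited.
    """
    layouts = [
        # cgroup v1
        ("/sys/fs/cgroup/memory/memory.limit_in_bytes",
         "/sys/fs/cgroup/memory/memory.usage_in_bytes"),
        # cgroup v2
        ("/sys/fs/cgroup/memory.max",
         "/sys/fs/cgroup/memory.current"),
    ]
    for limit_path, usage_path in layouts:
        try:
            with open(limit_path) as f:
                raw = f.read().strip()
            limit = None if raw == "max" else int(raw)
            with open(usage_path) as f:
                used = int(f.read().strip())
            return limit, used
        except OSError:
            continue
    return None, None
```

If read_cgroup_memory() returns a non-None limit, it can be fed straight into resource.setrlimit() exactly as in the snippet above.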
Once the soft and hard limits are set, your Python script will fail with a MemoryError instead of the process being Killed. The script should run on both Python 2 and Python 3; the two executions below show an over-allocation (failure) and a success:
# python3 memory.py
Traceback (most recent call last):
  File "memory.py", line 20, in <module>
    x = bytearray(1200*1024*1024)  # allocate 1200 MB
MemoryError
# python2 memory.py
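Because the failure is an ordinary exception rather than the OOM killer, your code can also catch it and degrade gracefully. A sketch of that pattern (our own, not from the article; the 64 MiB headroom and 256 MiB allocation are arbitrary demo figures, and /proc/self/statm is Linux-specific):

```python
import resource

# Current virtual memory size of this process (Linux: /proc/self/statm is in pages).
with open("/proc/self/statm") as f:
    vms = int(f.read().split()[0]) * resource.getpagesize()

# Demo soft limit: current usage plus 64 MiB of headroom.
old_soft, old_hard = resource.getrlimit(resource.RLIMIT_AS)
soft = vms + 64 * 1024 * 1024
resource.setrlimit(resource.RLIMIT_AS, (soft, old_hard))

caught = False
try:
    x = bytearray(256 * 1024 * 1024)  # deliberately exceeds the 64 MiB headroom
except MemoryError:
    caught = True  # degrade gracefully here instead of the pod being OOMKilled

# Restore the original soft limit so the rest of the process is unaffected.
resource.setrlimit(resource.RLIMIT_AS, (old_soft, old_hard))
print("caught MemoryError:", caught)
```

The allocation fails cleanly at the soft limit, so the container (and the rest of your Workspace) keeps running.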