I tried to check for memory leaks in a bunch of functions of mine using a simple decorator. It works, but it fails with this code, returning a random count_diff at every run. Why?
import tracemalloc
import gc
import functools
from uuid import uuid4
import pickle

def getUuid():
    return str(uuid4())

def trace(func):
    @functools.wraps(func)
    def inner():
        tracemalloc.start()
        snapshot1 = tracemalloc.take_snapshot().filter_traces(
            (tracemalloc.Filter(True, __file__), )
        )
        for i in range(100):
            func()
        gc.collect()
        snapshot2 = tracemalloc.take_snapshot().filter_traces(
            (tracemalloc.Filter(True, __file__), )
        )
        top_stats = snapshot2.compare_to(snapshot1, 'lineno')
        tracemalloc.stop()
        for stat in top_stats:
            if stat.count_diff > 3:
                raise ValueError(f"count_diff: {stat.count_diff}")
    return inner
dict_1 = {getUuid(): i for i in range(1000)}

@trace
def func_76():
    pickle.dumps(iter(dict_1))

func_76()
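One way to reduce the randomness is to run the function a few times *before* taking the first snapshot, so that one-time allocations (interned strings, pickle's internal tables, type caches) are already in place and don't show up in the diff. This is a sketch of that idea, not the original code; the warm-up loop, the `threshold` parameter, and returning the suspicious stats instead of raising are all my additions:

```python
import functools
import gc
import pickle
import tracemalloc

def trace(func, repeats=100, threshold=3):
    """Variant of the decorator with a warm-up phase (my addition)."""
    @functools.wraps(func)
    def inner():
        for _ in range(5):          # warm-up: populate one-time caches
            func()
        gc.collect()
        tracemalloc.start()
        snapshot1 = tracemalloc.take_snapshot().filter_traces(
            (tracemalloc.Filter(True, __file__),)
        )
        for _ in range(repeats):
            func()
        gc.collect()
        snapshot2 = tracemalloc.take_snapshot().filter_traces(
            (tracemalloc.Filter(True, __file__),)
        )
        tracemalloc.stop()
        top_stats = snapshot2.compare_to(snapshot1, 'lineno')
        # Report instead of raising, so noisy runs are visible but not fatal.
        return [s for s in top_stats if s.count_diff > threshold]
    return inner

@trace
def func_76():
    pickle.dumps(iter([]))

print(len(func_76()))   # number of allocation sites over the threshold
```

With the caches pre-populated, the diff between the two snapshots should contain far fewer spurious entries, though allocator noise can never be fully eliminated.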
It's something to do with pickling iterators because it still occurs
when I reduce func_76 to:
@trace
def func_76():
    pickle.dumps(iter([]))
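For what it's worth, pickling iterators is a supported code path: iterators over built-in containers implement `__reduce__`, so the reduced `func_76` is exercising real machinery, not an error path. A quick round-trip check (my own example, not from the thread):

```python
import pickle

# A pickled iterator records the remaining items and its position,
# so unpickling resumes iteration from the same point.
it = iter([10, 20, 30])
next(it)                      # consume one element before pickling
restored = pickle.loads(pickle.dumps(it))
print(list(restored))         # the two elements that were left
```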
I've done this other simple test. If it was a leak, then the amount of memory used or the counts would grow with the number of iterations:
#!/usr/bin/env python3
import tracemalloc
import gc
import pickle

tracemalloc.start()
snapshot1 = tracemalloc.take_snapshot().filter_traces(
    (tracemalloc.Filter(True, __file__), )
)
for i in range(10000000):
    pickle.dumps(iter([]))
gc.collect()
snapshot2 = tracemalloc.take_snapshot().filter_traces(
    (tracemalloc.Filter(True, __file__), )
)
top_stats = snapshot2.compare_to(snapshot1, 'lineno')
tracemalloc.stop()
for stat in top_stats:
    print(stat)
The result is:
/home/marco/sources/test.py:14: size=3339 B (+3339 B), count=63 (+63), average=53 B
/home/marco/sources/test.py:9: size=464 B (+464 B), count=1 (+1), average=464 B
/home/marco/sources/test.py:10: size=456 B (+456 B), count=1 (+1), average=456 B
/home/marco/sources/test.py:13: size=28 B (+28 B), count=1 (+1), average=28 B
So after 10 million loops, only 63 allocations remain, totalling about 3 KB. I don't think we can call that a leak, can we? Probably pickle would need many more cycles before we could be sure there's a real leak.
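A complementary check (my own sketch, not from the thread): a genuine per-call leak scales with the number of iterations, while one-time noise stays flat. So instead of more cycles, compare the net traced memory after N and after 10N iterations:

```python
import gc
import pickle
import tracemalloc

def traced_size(iterations):
    """Net traced allocation size in bytes after running
    pickle.dumps(iter([])) the given number of times."""
    gc.collect()
    tracemalloc.start()
    for _ in range(iterations):
        pickle.dumps(iter([]))
    gc.collect()
    current, _peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return current

small = traced_size(1_000)
large = traced_size(10_000)
# If pickling iterators leaked, `large` would be roughly 10x `small`;
# roughly flat numbers point to one-time caches, not a per-call leak.
print(small, large)
```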
On 21 Jul 2022, at 21:54, Marco Sulla <Marco.Sulla.Python@gmail.com> wrote:

On Thu, 21 Jul 2022 at 22:28, MRAB <python@mrabarnett.plus.com> wrote:
> It's something to do with pickling iterators because it still occurs
> when I reduce func_76 to:
>
> @trace
> def func_76():
>     pickle.dumps(iter([]))
That's very strange. I've found a number of true memory leaks with this
decorator, so it seems reliable. It behaves correctly with pickle and with
iter separately, but not when pickling iterators.
--
https://mail.python.org/mailman/listinfo/python-list
With code as complex as Python's, there will be memory allocations that occur that are not directly related to the Python code you test.
To put it another way, there is noise in your memory-allocation signal.
Usually the signal of a memory leak is very clear, as you noticed.
For rare leaks I would use a tool like valgrind.