On 1 Oct 2017, at 15.25, Rakesh Pandit <rakesh@tuxera.com> wrote:

While separating the read and erase mempools in 22da65a1b, pblk_g_rq_cache
was used twice to set aside memory for both erase and read requests.
Because the same kmem cache is used in both places, a single call to
kmem_cache_destroy does not deallocate everything. Repeatedly loading and
unloading the pblk module would eventually leak memory.

The fix is to use a truly separate kmem cache and track it appropriately.

Fixes: 22da65a1b ("lightnvm: pblk: decouple read/erase mempools")
Signed-off-by: Rakesh Pandit <rakesh@tuxera.com>
On 1 Oct 2017, at 15.25, Rakesh Pandit <rakesh@tuxera.com> wrote:

> [patch description snipped; quoted in full above]
I'm not sure I follow this logic. I assume you are thinking of the
refcount on the kmem_cache. During cache creation, all is good; if a
different cache creation fails, destruction is guaranteed, since the
refcount is 0. On teardown (pblk_core_free), we destroy the mempools
associated with the caches. In this case the refcount goes to 0 too, as
we destroy the 2 mempools. So I don't see where the leak can happen. Am
I missing something?

In any case, Jens reported some bugs on the mempools, where we did not
guarantee forward progress. You can find the original discussion and the
mempool audit at [1]. It would be good if you reviewed these.

[1] https://www.spinics.net/lists/kernel/msg2602274.html
On Mon, Oct 02, 2017 at 02:09:35PM +0200, Javier González wrote:

> > [patch description snipped]
>
> I'm not sure I follow this logic. I assume that you're thinking of the
> refcount on kmem_cache. During cache creation, all is good; if a
> different cache creation fails, destruction is guaranteed, since the
> refcount is 0. On tear down (pblk_core_free), we destroy the mempools
> associated to the caches. In this case, the refcount goes to 0 too, as
> we destroy the 2 mempools. So I don't see where the leak can happen. Am
> I missing something?
>
> In any case, Jens reported some bugs on the mempools, where we did not
> guarantee forward progress. Here you can find the original discussion
> and the mempool audit [1]. Would be good if you reviewed these.
>
> [1] https://www.spinics.net/lists/kernel/msg2602274.html
Thanks, yes, it makes sense to follow up in the patch thread. I will
respond to the above questions there later today.