| author | Michael Ellerman <michael@ellerman.id.au> | 2005-01-14 23:21:42 -0800 |
|---|---|---|
| committer | Linus Torvalds <torvalds@ppc970.osdl.org> | 2005-01-14 23:21:42 -0800 |
| commit | 7d033a996cd7cfefd8e9abbb0b61db85bb71e806 (patch) | |
| tree | 47bcdc7981c72d7b3da58cb20a022882505b84bf /drivers | |
| parent | 451321d6023ff61e143adb9743d31571682e445b (diff) | |
| download | history-7d033a996cd7cfefd8e9abbb0b61db85bb71e806.tar.gz | |
[PATCH] ppc64: make iseries_veth call flush_scheduled_work()
When the iseries_veth driver module is unloaded there is the potential for an
oops and also some memory leakage.
Because the HvLpEvent_unregisterHandler() function performed no synchronisation,
it was possible for the handler being unregistered to still be running
on another CPU *after* HvLpEvent_unregisterHandler() had returned. This
could cause the iseries_veth driver to leave work in the events work queue
after the module had been unloaded. When that work was eventually executed
we got an oops.
In addition some of the data structures in the iseries_veth driver were not
being correctly freed when the module was unloaded.
This is the second patch: it makes iseries_veth call flush_scheduled_work()
once we are sure the handler is no longer running, and also fixes the memory
leaks.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Diffstat (limited to 'drivers')
| -rw-r--r-- | drivers/net/iseries_veth.c | 26 |
|---|---|---|
1 file changed, 22 insertions(+), 4 deletions(-)
diff --git a/drivers/net/iseries_veth.c b/drivers/net/iseries_veth.c
index 43ef7951eac0b4..855f8b2cf13b6f 100644
--- a/drivers/net/iseries_veth.c
+++ b/drivers/net/iseries_veth.c
@@ -642,7 +642,7 @@ static int veth_init_connection(u8 rlp)
 	return 0;
 }
 
-static void veth_destroy_connection(u8 rlp)
+static void veth_stop_connection(u8 rlp)
 {
 	struct veth_lpar_connection *cnx = veth_cnx[rlp];
 
@@ -671,9 +671,18 @@ static void veth_destroy_connection(u8 rlp)
 			HvLpEvent_Type_VirtualLan,
 			cnx->num_ack_events,
 			NULL, NULL);
+}
+
+static void veth_destroy_connection(u8 rlp)
+{
+	struct veth_lpar_connection *cnx = veth_cnx[rlp];
+
+	if (! cnx)
+		return;
 
-	if (cnx->msgs)
-		kfree(cnx->msgs);
+	kfree(cnx->msgs);
+	kfree(cnx);
+	veth_cnx[rlp] = NULL;
 }
 
 /*
@@ -1375,9 +1384,18 @@ void __exit veth_module_cleanup(void)
 	vio_unregister_driver(&veth_driver);
 
 	for (i = 0; i < HVMAXARCHITECTEDLPS; ++i)
-		veth_destroy_connection(i);
+		veth_stop_connection(i);
 
 	HvLpEvent_unregisterHandler(HvLpEvent_Type_VirtualLan);
+
+	/* Hypervisor callbacks may have scheduled more work while we
+	 * were destroying connections. Now that we've disconnected from
+	 * the hypervisor make sure everything's finished. */
+	flush_scheduled_work();
+
+	for (i = 0; i < HVMAXARCHITECTEDLPS; ++i)
+		veth_destroy_connection(i);
+
 }
 module_exit(veth_module_cleanup);