author    Paul Gortmaker <paul.gortmaker@windriver.com>  2018-07-24 12:19:40 -0400
committer Paul Gortmaker <paul.gortmaker@windriver.com>  2018-07-24 12:19:40 -0400
commit    f6691b73f3ce22360ccec869a6191e8a0fcc16dd (patch)
tree      09388e6237d22947db67b60cac04a9293937d9d7
parent    262b8b63d2af08dafd5124c12f4b56d06afcdf3f (diff)
download  longterm-queue-4.12-f6691b73f3ce22360ccec869a6191e8a0fcc16dd.tar.gz
sunrpc: drop patch that breaks build
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
 queue/series                                                   |  1 -
 queue/xprtrdma-Don-t-defer-fencing-an-async-RPC-s-chunks.patch | 45 ---------
 2 files changed, 0 insertions(+), 46 deletions(-)
diff --git a/queue/series b/queue/series
index 2f0631f..9cee801 100644
--- a/queue/series
+++ b/queue/series
@@ -105,7 +105,6 @@ scsi-sd-change-allow_restart-to-bool-in-sysfs-interf.patch
scsi-bfa-integer-overflow-in-debugfs.patch
raid5-ppl-check-recovery_offset-when-performing-ppl-.patch
md-cluster-fix-wrong-condition-check-in-raid1_write_.patch
-xprtrdma-Don-t-defer-fencing-an-async-RPC-s-chunks.patch
udf-Avoid-overflow-when-session-starts-at-large-offs.patch
macvlan-Only-deliver-one-copy-of-the-frame-to-the-ma.patch
RDMA-cma-Avoid-triggering-undefined-behavior.patch
diff --git a/queue/xprtrdma-Don-t-defer-fencing-an-async-RPC-s-chunks.patch b/queue/xprtrdma-Don-t-defer-fencing-an-async-RPC-s-chunks.patch
deleted file mode 100644
index 6d5ae8f..0000000
--- a/queue/xprtrdma-Don-t-defer-fencing-an-async-RPC-s-chunks.patch
+++ /dev/null
@@ -1,45 +0,0 @@
-From 951450729009a1a7de11051acd6e4cf66206378c Mon Sep 17 00:00:00 2001
-From: Chuck Lever <chuck.lever@oracle.com>
-Date: Mon, 9 Oct 2017 12:03:26 -0400
-Subject: [PATCH] xprtrdma: Don't defer fencing an async RPC's chunks
-
-commit 8f66b1a529047a972cb9602a919c53a95f3d7a2b upstream.
-
-In current kernels, waiting in xprt_release appears to be safe to
-do. I had erroneously believed that for ASYNC RPCs, waiting of any
-kind in xprt_release->xprt_rdma_free would result in deadlock. I've
-done injection testing and consulted with Trond to confirm that
-waiting in the RPC release path is safe.
-
-For the very few times where RPC resources haven't yet been released
-earlier by the reply handler, it is safe to wait synchronously in
-xprt_rdma_free for invalidation rather than defering it to MR
-recovery.
-
-Note: When the QP is error state, posting a LocalInvalidate should
-flush and mark the MR as bad. There is no way the remote HCA can
-access that MR via a QP in error state, so it is effectively already
-inaccessible and thus safe for the Upper Layer to access. The next
-time the MR is used it should be recognized and cleaned up properly
-by frwr_op_map.
-
-Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
-Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
-
-diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
-index 62ecbccd9748..573aab1895f3 100644
---- a/net/sunrpc/xprtrdma/transport.c
-+++ b/net/sunrpc/xprtrdma/transport.c
-@@ -685,7 +685,7 @@ xprt_rdma_free(struct rpc_task *task)
- dprintk("RPC: %s: called on 0x%p\n", __func__, req->rl_reply);
-
- if (unlikely(!list_empty(&req->rl_registered)))
-- ia->ri_ops->ro_unmap_safe(r_xprt, req, !RPC_IS_ASYNC(task));
-+ ia->ri_ops->ro_unmap_sync(r_xprt, &req->rl_registered);
- rpcrdma_unmap_sges(ia, req);
- rpcrdma_buffer_put(req);
- }
---
-2.15.0
-