==========
MD Cluster
==========

The cluster MD is a shared-device RAID for a cluster. It supports
two levels: raid1 and raid10 (limited support).

1. On-disk format
=================

Separate write-intent-bitmaps are used for each cluster node. The
bitmaps record all writes that may have been started on that node,
and may not yet have finished. The on-disk layout is::

  0                     4k                    8k                    12k
  -------------------------------------------------------------------
  | idle               | md super            | bm super [0] + bits |
  | bm bits[0, contd]  | bm super[1] + bits  | bm bits[1, contd]   |
  | bm super[2] + bits | bm bits [2, contd]  | bm super[3] + bits  |
  | bm bits [3, contd] |                     |                     |

During "normal" functioning we assume the filesystem ensures that only
one node writes to any given block at a time, so a write request will

- set the appropriate bit (if not already set)
- commit the write to all mirrors
- schedule the bit to be cleared after a timeout.

Reads are just handled normally. It is up to the filesystem to ensure
one node doesn't read from a location where another node (or the same
node) is writing.
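Reading the diagram literally, the md superblock occupies the 4k block
at offset 4k, and each bitmap slot then takes two 4k blocks (a
"super + bits" block followed by a "bits, contd" block) starting at 8k.
The sketch below is a minimal illustration of that arithmetic only; the
helper name and the fixed two-blocks-per-slot assumption come from the
diagram above, not from the kernel sources::

  #include <stdio.h>

  #define BLK            4096UL          /* 4k granularity used in the diagram */
  #define BM_AREA_OFF    (2 * BLK)       /* first "bm super [0] + bits" block  */
  #define BM_SLOT_BLKS   2               /* assumed: super+bits, then contd.   */

  /* Hypothetical helper: byte offset of the bitmap area for a given slot. */
  static unsigned long bm_slot_offset(unsigned int slot)
  {
      return BM_AREA_OFF + (unsigned long)slot * BM_SLOT_BLKS * BLK;
  }

  int main(void)
  {
      for (unsigned int slot = 0; slot < 4; slot++)
          printf("slot %u: bitmap super at byte offset %lu\n",
                 slot, bm_slot_offset(slot));   /* slot 0 -> 8192 (8k), ...   */
      return 0;
  }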
2. DLM Locks for management
===========================

There are three groups of locks for managing the device:

2.1 Bitmap lock resource (bm_lockres)
-------------------------------------

The bm_lockres protects individual node bitmaps. They are named in the
form bitmap000 for node 1, bitmap001 for node 2 and so on. When a node
joins the cluster, it acquires the lock in PW mode and it stays so
during the lifetime the node is part of the cluster. The lock resource
number is based on the slot number returned by the DLM subsystem. Since
DLM starts node count from one and bitmap slots start from zero, one is
subtracted from the DLM slot number to arrive at the bitmap slot number.

The LVB of the bitmap lock for a particular node records the range of
sectors that are being re-synced by that node. No other node may write
to those sectors. This is used when a new node joins the cluster.
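Because DLM slots count from one and bitmap slots from zero, the
mapping from a DLM slot to its lock resource name is an easy place for
an off-by-one mistake. A small illustrative sketch (not kernel code;
the helper is hypothetical)::

  #include <stdio.h>

  /*
   * Illustrative only: DLM slot N owns the lock resource named
   * "bitmap%03d" with N - 1, so node 1 gets bitmap000, node 2 bitmap001.
   */
  static void bm_lockres_name(unsigned int dlm_slot, char *buf, size_t len)
  {
      snprintf(buf, len, "bitmap%03u", dlm_slot - 1);
  }

  int main(void)
  {
      char name[16];

      for (unsigned int dlm_slot = 1; dlm_slot <= 3; dlm_slot++) {
          bm_lockres_name(dlm_slot, name, sizeof(name));
          printf("DLM slot %u -> %s\n", dlm_slot, name);
      }
      return 0;
  }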
2.2 Message passing locks
-------------------------

Each node has to communicate with other nodes when starting or ending
resync, and for metadata superblock updates. This communication is
managed through three locks: "token", "message", and "ack", together
with the Lock Value Block (LVB) of the "message" lock.

4. Handling Failures
====================

4.1 Node Failure
----------------

When a node fails, the DLM informs the cluster with the slot number.
The node starts a cluster recovery thread. The cluster recovery thread:

- acquires the bitmap lock of the failed node
- opens the bitmap
- reads the bitmap of the failed node
- copies the set bitmap to local node
- cleans the bitmap of the failed node
- releases bitmap lock of the failed node
- initiates resync of the bitmap on the current node

md_check_recovery is invoked within recover_bitmaps; md_check_recovery
then calls metadata_update_start/finish, which locks the communication
by lock_comm. This means that while one node is resyncing, it blocks
all other nodes from writing anywhere on the array.
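The heart of those recovery steps is a merge of the failed node's
write-intent bits into the local node's bitmap, after which the failed
node's bitmap is cleared and the merged regions are resynced. A
self-contained toy sketch of just that merge, using plain bitmasks as
stand-ins for the on-disk bitmaps (locking, I/O and the actual resync
are omitted, and the names are made up for illustration)::

  #include <stdint.h>
  #include <stdio.h>

  #define BM_WORDS 4

  /* Illustrative in-memory stand-ins for the per-node write-intent bitmaps. */
  static uint64_t node_bitmap[2][BM_WORDS];

  static void recover_bitmap(int local, int failed)
  {
      for (int i = 0; i < BM_WORDS; i++) {
          node_bitmap[local][i] |= node_bitmap[failed][i];  /* copy set bits    */
          node_bitmap[failed][i] = 0;                       /* clean its bitmap */
      }
      /* ...then initiate a resync of the regions now marked locally. */
  }

  int main(void)
  {
      node_bitmap[1][0] = 0x5;          /* failed node left two dirty regions */
      recover_bitmap(0, 1);
      printf("local bitmap word 0: 0x%llx\n",
             (unsigned long long)node_bitmap[0][0]);   /* prints 0x5 */
      return 0;
  }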
The resync process is the regular md resync. However, in a clustered
environment when a resync is performed, it needs to tell other nodes
of the areas which are suspended. Before a resync starts, the node
sends out RESYNCING with the (lo,hi) range of the area which needs to
be suspended. Each node maintains a suspend_list, which contains the
list of ranges which are currently suspended. On receiving RESYNCING,
the node adds the range to the suspend_list. Similarly, when the node
performing resync finishes, it sends RESYNCING with an empty range to
other nodes and the other nodes remove the corresponding entry from
the suspend_list.

A helper function, ->area_resyncing(), can be used to check if a
particular I/O range should be suspended or not.
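A minimal, self-contained sketch of the suspend_list idea, assuming a
simple fixed array of (lo,hi) sector ranges (the kernel keeps a proper
per-node list; all names here are illustrative). The overlap test
mirrors the question ->area_resyncing() is described as answering::

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  typedef uint64_t sector_t;     /* stand-in for the kernel's sector type */

  struct suspend_range {
      sector_t lo;
      sector_t hi;
  };

  /* Illustrative suspend_list: ranges received in RESYNCING messages. */
  static struct suspend_range suspend_list[] = {
      { .lo = 0, .hi = 8192 },   /* e.g. another node resyncing 0..8192 */
  };

  /* Does the I/O range [lo, hi) overlap any currently suspended range? */
  static bool io_range_suspended(sector_t lo, sector_t hi)
  {
      for (size_t i = 0; i < sizeof(suspend_list) / sizeof(suspend_list[0]); i++)
          if (lo < suspend_list[i].hi && hi > suspend_list[i].lo)
              return true;
      return false;
  }

  int main(void)
  {
      printf("write 4096..4160 suspended? %d\n", io_range_suspended(4096, 4160));
      printf("write 9000..9064 suspended? %d\n", io_range_suspended(9000, 9064));
      return 0;
  }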
4.2 Device Failure
------------------

Device failures are handled and communicated with the metadata update
routine. When a node detects a device failure, it does not allow any
further writes to that device until the failure has been acknowledged
by all other nodes.
5. Adding a new Device
======================

For adding a new device, it is necessary that all nodes "see" the new
device to be added. For this, the following algorithm is used:

1. Node 1 issues mdadm --manage /dev/mdX --add /dev/sdYY which issues
   ioctl(ADD_NEW_DISK with disc.state set to MD_DISK_CLUSTER_ADD)
2. Node 1 sends a NEWDISK message with uuid and slot number
3. Other nodes issue kobject_uevent_env with uuid and slot number
   (Steps 4,5 could be a udev rule)
4. In userspace, the node searches for the disk, perhaps using
   blkid -t SUB_UUID=""
5. Other nodes issue either of the following, depending on whether the
   disk was found (see the sketch after this list):

   - ioctl(ADD_NEW_DISK with disc.state set to MD_DISK_CANDIDATE and
     disc.number set to slot number)
   - ioctl(CLUSTERED_DISK_NACK)

6. Other nodes drop lock on "no-new-devs" (CR) if device is found
7. Node 1 attempts EX lock on "no-new-dev"
8. If node 1 gets the lock, it sends METADATA_UPDATED after unmarking
   the disk as SpareLocal
9. If node 1 does not get the "no-new-dev" lock, it fails the
   operation and sends METADATA_UPDATED.
10. Other nodes get the information whether a disk is added or not by
    the following METADATA_UPDATED.
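Step 5 above is an ordinary ADD_NEW_DISK ioctl issued from userspace
once the disk has been found locally. The sketch below illustrates
that single step, assuming the slot number from the NEWDISK message
and the found device's major/minor are already known; it relies on
mdu_disk_info_t, ADD_NEW_DISK and MD_DISK_CANDIDATE from the kernel's
raid uapi headers, and omits the NACK path and most error handling::

  #include <fcntl.h>
  #include <sys/ioctl.h>
  #include <unistd.h>
  #include <linux/raid/md_p.h>   /* MD_DISK_CANDIDATE             */
  #include <linux/raid/md_u.h>   /* ADD_NEW_DISK, mdu_disk_info_t */

  /*
   * Illustrative step 5: acknowledge the NEWDISK message by offering
   * the locally-found device as a cluster candidate.
   */
  int ack_new_disk(const char *md_dev, int slot, int dev_major, int dev_minor)
  {
      mdu_disk_info_t info = {
          .number = slot,                      /* slot from the NEWDISK message */
          .major  = dev_major,
          .minor  = dev_minor,
          .state  = 1 << MD_DISK_CANDIDATE,    /* "candidate", as in step 5     */
      };
      int fd = open(md_dev, O_RDWR);
      int ret;

      if (fd < 0)
          return -1;
      ret = ioctl(fd, ADD_NEW_DISK, &info);
      close(fd);
      return ret;
  }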
6. Module interface
===================

There are 17 call-backs which the md core can make to the cluster
module. Understanding these can give a good overview of the whole
process.

6.1 join(nodes) and leave()
---------------------------

These are called when an array is started with a clustered bitmap,
and when the array is stopped. join() ensures the cluster is available
and initializes the various resources. Only the first 'nodes' nodes in
the cluster can use the array.

6.2 slot_number()
-----------------

Reports the slot number advised by the cluster infrastructure. Range
is from 0 to nodes-1.

6.3 resync_info_update()
------------------------

This updates the resync range that is stored in the bitmap lock. The
starting point is updated as the resync progresses. The end point is
always the end of the array. It does *not* send a RESYNCING message.
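Section 2.1 notes that the bitmap lock's LVB records the sector range
being re-synced, and resync_info_update() is what refreshes it. One
plausible, purely illustrative encoding of that range is a (lo, hi)
pair of sector numbers; the structure and field names below are
assumptions for illustration, not the kernel's actual LVB layout::

  #include <stdint.h>
  #include <stdio.h>

  /* Assumed, illustrative layout of the resync range in the LVB. */
  struct resync_range {
      uint64_t lo;   /* start of the range; moves forward as resync progresses */
      uint64_t hi;   /* end of the range; per 6.3, always the end of the array */
  };

  /* Illustrative counterpart of resync_info_update(). */
  static void resync_range_update(struct resync_range *r,
                                  uint64_t resync_pos, uint64_t array_sectors)
  {
      r->lo = resync_pos;
      r->hi = array_sectors;
  }

  int main(void)
  {
      struct resync_range r;

      resync_range_update(&r, 1024, 1 << 20);
      printf("resyncing sectors %llu..%llu\n",
             (unsigned long long)r.lo, (unsigned long long)r.hi);
      return 0;
  }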
6.4 resync_start(), resync_finish()
-----------------------------------

These are called when resync/recovery/reshape starts or stops. They
update the resyncing range in the bitmap lock and also send a
RESYNCING message. resync_start reports the whole array as resyncing,
resync_finish reports none of it.

resync_finish() also sends a BITMAP_NEEDS_SYNC message which allows
some other node to take over.

6.5 metadata_update_start(), metadata_update_finish(), metadata_update_cancel()
--------------------------------------------------------------------------------

metadata_update_start is used to get exclusive access to the metadata.
If a change is still needed once that access is gained,
metadata_update_finish() will send a METADATA_UPDATE message to all
other nodes, otherwise metadata_update_cancel() can be used to release
the lock.
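These three call-backs form a take-lock / commit-or-cancel pattern.
The sketch below shows that usage shape; only the call-back names come
from this document, and the context structure and helper bodies are
stand-ins so the example is self-contained, not actual md core code::

  /* Stand-in context and no-op stubs so the sketch compiles. */
  struct ctx { int metadata_dirty; };

  static int  metadata_update_start(struct ctx *c)  { (void)c; return 0; }
  static void metadata_update_finish(struct ctx *c) { (void)c; }
  static void metadata_update_cancel(struct ctx *c) { (void)c; }

  /* Expected shape: take exclusive access, then either commit or back out. */
  static int update_metadata_clustered(struct ctx *c)
  {
      int err = metadata_update_start(c);

      if (err)
          return err;

      if (!c->metadata_dirty) {
          metadata_update_cancel(c);     /* nothing to write: just release   */
          return 0;
      }

      /* ... write the updated superblock to all devices here ... */
      metadata_update_finish(c);         /* sends METADATA_UPDATE to others  */
      return 0;
  }

  int main(void)
  {
      struct ctx c = { .metadata_dirty = 1 };

      return update_metadata_clustered(&c);
  }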
6.6 area_resyncing()
--------------------

This combines two elements of functionality.

Firstly, it will check if any node is currently resyncing anything in
a given range of sectors. If any resync is found, then the caller will
avoid writing or read-balancing in that range.

Secondly, while node recovery is happening it reports that all areas
are resyncing for READ requests. This avoids races between the
cluster-filesystem and the cluster-RAID handling a node failure.

6.7 add_new_disk_start(), add_new_disk_finish(), new_disk_ack()
----------------------------------------------------------------

These are used to manage the new-disk protocol described above. When a
new device is added, add_new_disk_start() is called before it is bound
to the array and, if that succeeds, add_new_disk_finish() is called
once the device is fully added.

When a device is added in acknowledgement to a previous request, or
when the device is declared "unavailable", new_disk_ack() is called.
6.8 remove_disk()
-----------------

This is called when a spare or failed device is removed from the
array. It causes a REMOVE message to be sent to other nodes.

6.9 gather_bitmaps()
--------------------

This sends a RE_ADD message to all other nodes and then gathers bitmap
information from all bitmaps. This combined bitmap is then used to
recover the re-added device.

6.10 lock_all_bitmaps() and unlock_all_bitmaps()
------------------------------------------------

These are called when the bitmap is changed to none. If a node plans
to clear the cluster raid's bitmap, it needs to make sure no other
node is using the raid, which is achieved by locking all bitmap locks
within the cluster; those locks are then unlocked accordingly.
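Taken together, sections 6.1-6.10 name 17 call-backs, matching the
count given at the start of section 6. One way to picture them is as a
single operations table; the struct below is an illustrative summary
only, and the signatures and parameter types are assumptions rather
than the kernel's actual definition::

  #include <stdbool.h>
  #include <stdint.h>

  typedef uint64_t sector_t;   /* stand-in for the kernel's sector type  */
  struct md_ctx;               /* opaque stand-in for the array context  */

  /* Illustrative grouping of the call-backs described in section 6. */
  struct cluster_ops_sketch {
      int  (*join)(struct md_ctx *ctx, int nodes);                    /* 6.1  */
      int  (*leave)(struct md_ctx *ctx);                              /* 6.1  */
      int  (*slot_number)(struct md_ctx *ctx);                        /* 6.2  */
      int  (*resync_info_update)(struct md_ctx *ctx,
                                 sector_t lo, sector_t hi);           /* 6.3  */
      int  (*resync_start)(struct md_ctx *ctx);                       /* 6.4  */
      int  (*resync_finish)(struct md_ctx *ctx);                      /* 6.4  */
      int  (*metadata_update_start)(struct md_ctx *ctx);              /* 6.5  */
      int  (*metadata_update_finish)(struct md_ctx *ctx);             /* 6.5  */
      void (*metadata_update_cancel)(struct md_ctx *ctx);             /* 6.5  */
      bool (*area_resyncing)(struct md_ctx *ctx, int direction,
                             sector_t lo, sector_t hi);               /* 6.6  */
      int  (*add_new_disk_start)(struct md_ctx *ctx);                 /* 6.7  */
      int  (*add_new_disk_finish)(struct md_ctx *ctx);                /* 6.7  */
      int  (*new_disk_ack)(struct md_ctx *ctx, bool ack);             /* 6.7  */
      int  (*remove_disk)(struct md_ctx *ctx);                        /* 6.8  */
      int  (*gather_bitmaps)(struct md_ctx *ctx);                     /* 6.9  */
      int  (*lock_all_bitmaps)(struct md_ctx *ctx);                   /* 6.10 */
      void (*unlock_all_bitmaps)(struct md_ctx *ctx);                 /* 6.10 */
  };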
7. Unsupported features
=======================

There are some things which are not supported by cluster MD yet.

- change array_sectors.