[gpfsug-discuss] Rebalancing with mmrestripefs -P

david_johnson at brown.edu
Mon Aug 20 19:06:23 BST 2018


Does anyone have a good rule of thumb for how many IOPS to allow for background QoS maintenance tasks?
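
(For context, the knob in question, as a minimal sketch -- the device
name "gpfs" and the 300IOPS cap here are placeholders, not
recommendations:)

    # cap the QoS maintenance class, which mmrestripefs runs in,
    # while leaving ordinary user I/O ("other") unthrottled
    mmchqos gpfs --enable pool=cit_10tb,maintenance=300IOPS,other=unlimited

    # then watch actual consumption to calibrate the cap
    mmlsqos gpfs --seconds 60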



  -- ddj
Dave Johnson

> On Aug 20, 2018, at 2:02 PM, Frederick Stock <stockf at us.ibm.com> wrote:
> 
> That should do what you want.  Be aware that mmrestripefs generates significant I/O load, so you should either use the QoS feature to mitigate its impact or run the command when the system is not busy.
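> 
> (A sketch of that QoS route -- "gpfs" as the device name and the
> 500IOPS cap are placeholders, not tuned values: throttle the
> maintenance class, run the restripe, then lift the throttle.
> mmrestripefs also accepts -N to restrict which nodes take part,
> which is another way to bound the load.)
> 
>     mmchqos gpfs --enable pool=cit_10tb,maintenance=500IOPS
>     mmrestripefs gpfs -b -P cit_10tb
>     mmchqos gpfs --disable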
> 
> Note that you have two more NSDs in failure group 33 than in failure group 23.  You may want to move one of the NSDs in failure group 33 into failure group 23 so that you have equal storage space in both failure groups.
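> 
> (A sketch of that change with mmchdisk -- the device name "gpfs" is a
> placeholder and d06_cit_33 is picked arbitrarily; the descriptor
> fields here are DiskName:::DiskUsage:FailureGroup, so verify against
> your own mmlsdisk output before running it:)
> 
>     mmchdisk gpfs change -d "d06_cit_33:::dataOnly:23"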
> 
> Fred
> __________________________________________________
> Fred Stock | IBM Pittsburgh Lab | 720-430-8821
> stockf at us.ibm.com
> 
> 
> 
> From:        David Johnson <david_johnson at brown.edu>
> To:        gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> Date:        08/20/2018 12:55 PM
> Subject:        [gpfsug-discuss] Rebalancing with mmrestripefs -P
> Sent by:        gpfsug-discuss-bounces at spectrumscale.org
> 
> 
> 
> I have one storage pool that was recently doubled in size, and another pool's contents were migrated into it using mmapplypolicy.
> The new half is only 50% full, and the old half is 94% full. 
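> 
> (For reference, a migration like that can be expressed as a one-rule
> policy; the rule name and 'old_pool' are hypothetical placeholders,
> as is the device name "gpfs":)
> 
>     RULE 'mig' MIGRATE FROM POOL 'old_pool' TO POOL 'cit_10tb'
> 
> run via something like: mmapplypolicy gpfs -P mig.pol -I yes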
> 
> Disks in storage pool: cit_10tb (Maximum disk size allowed is 516 TB)
> disk                disk size  failure holds    holds                 free                free
> name                              group metadata data       in full blocks        in fragments
> --------------- ------------- -------- -------- ----- -------------------- -------------------
> d05_george_23          50.49T       23 No       Yes          25.91T ( 51%)        18.93G ( 0%) 
> d04_george_23          50.49T       23 No       Yes          25.91T ( 51%)         18.9G ( 0%) 
> d03_george_23          50.49T       23 No       Yes           25.9T ( 51%)        19.12G ( 0%) 
> d02_george_23          50.49T       23 No       Yes           25.9T ( 51%)        19.03G ( 0%) 
> d01_george_23          50.49T       23 No       Yes           25.9T ( 51%)        18.92G ( 0%) 
> d00_george_23          50.49T       23 No       Yes          25.91T ( 51%)        19.05G ( 0%) 
> d06_cit_33             50.49T       33 No       Yes          3.084T (  6%)        70.35G ( 0%) 
> d07_cit_33             50.49T       33 No       Yes          3.084T (  6%)         70.2G ( 0%) 
> d05_cit_33             50.49T       33 No       Yes          3.084T (  6%)        69.93G ( 0%) 
> d04_cit_33             50.49T       33 No       Yes          3.085T (  6%)        70.11G ( 0%) 
> d03_cit_33             50.49T       33 No       Yes          3.084T (  6%)        70.08G ( 0%) 
> d02_cit_33             50.49T       33 No       Yes          3.083T (  6%)         70.3G ( 0%) 
> d01_cit_33             50.49T       33 No       Yes          3.085T (  6%)        70.25G ( 0%) 
> d00_cit_33             50.49T       33 No       Yes          3.083T (  6%)        70.28G ( 0%) 
>                 -------------                         -------------------- -------------------
> (pool total)           706.9T                                180.1T ( 25%)        675.5G ( 0%)
> 
>  Will the command "mmrestripefs /gpfs -b -P cit_10tb" move the data blocks from the _cit_ NSDs to the _george_ NSDs,
> so that they all end up around 75% full?
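> 
> (One way to verify the outcome afterwards is to rerun the pool report;
> "gpfs" as the device name is a placeholder:)
> 
>     mmdf gpfs -P cit_10tb --block-size auto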
> 
> Thanks,
>  — ddj
> Dave Johnson
> Brown University CCV/CIS
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss