[gpfsug-discuss] replication and no failure groups

Brian Marshall mimarsh2 at vt.edu
Tue Jan 10 13:24:33 GMT 2017


That's the answer. We hadn't read deep enough and just assumed that -1
meant the default failure group, or no failure groups at all.

Thanks,
Brian

On Mon, Jan 9, 2017 at 5:24 PM, Jan-Frode Myklebust <janfrode at tanso.net>
wrote:

> Yaron, doesn't "-1" make each of these disks an independent failure group?
>
> From 'man mmcrnsd':
>
> "The default is -1, which indicates this disk has no point of failure in
> common with any other disk."
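>
> For illustration only (device paths and server names below are placeholders,
> not from this cluster): if the goal is for the two replicas to land on
> different storage subsystems, explicit failure groups can be set in the NSD
> stanza file rather than leaving the default of -1, e.g.:
>
>   %nsd: device=/dev/mapper/lun_a1
>     nsd=nsd_a_7
>     servers=nsdserver01,nsdserver02
>     usage=dataOnly
>     failureGroup=1
>     pool=system
>
>   %nsd: device=/dev/mapper/lun_b1
>     nsd=nsd_b_7
>     servers=nsdserver03,nsdserver04
>     usage=dataOnly
>     failureGroup=2
>     pool=system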
>
>
>   -jf
>
>
> On Mon, Jan 9, 2017 at 21:54, Yaron Daniel <YARD at il.ibm.com> wrote:
>
>> Hi
>>
>> So - are you able to have GPFS replication for the MD failure groups?
>>
>> I can see that you have 3 failure groups for data (-1, 2012, 2034) - how
>> many storage subsystems do you have?
>>
>>
>>
>>
>> Regards
>>
>> ------------------------------
>>
>> Yaron Daniel
>> Server, Storage and Data Services - Team Leader
>> Global Technology Services, IBM Israel
>> 94 Em Ha'Moshavot Rd, Petach Tiqva, 49527
>> Phone: +972-3-916-5672   Fax: +972-3-916-5672
>> Mobile: +972-52-8395593
>> e-mail: yard at il.ibm.com
>> IBM Israel: http://www.ibm.com/il/he/
>>
>>
>> From:    "J. Eric Wonderley" <eric.wonderley at vt.edu>
>> To:      gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
>> Date:    01/09/2017 10:48 PM
>> Subject: Re: [gpfsug-discuss] replication and no failure groups
>> Sent by: gpfsug-discuss-bounces at spectrumscale.org
>> ------------------------------
>>
>>
>>
>> Hi Yaron:
>>
>> This is the filesystem:
>>
>> [root at cl005 net]# mmlsdisk work
>> disk         driver   sector     failure holds    holds                            storage
>> name         type       size       group metadata data  status        availability pool
>> ------------ -------- ------ ----------- -------- ----- ------------- ------------ ------------
>> nsd_a_7      nsd         512          -1 No       Yes   ready         up           system
>> nsd_b_7      nsd         512          -1 No       Yes   ready         up           system
>> nsd_c_7      nsd         512          -1 No       Yes   ready         up           system
>> nsd_d_7      nsd         512          -1 No       Yes   ready         up           system
>> nsd_a_8      nsd         512          -1 No       Yes   ready         up           system
>> nsd_b_8      nsd         512          -1 No       Yes   ready         up           system
>> nsd_c_8      nsd         512          -1 No       Yes   ready         up           system
>> nsd_d_8      nsd         512          -1 No       Yes   ready         up           system
>> nsd_a_9      nsd         512          -1 No       Yes   ready         up           system
>> nsd_b_9      nsd         512          -1 No       Yes   ready         up           system
>> nsd_c_9      nsd         512          -1 No       Yes   ready         up           system
>> nsd_d_9      nsd         512          -1 No       Yes   ready         up           system
>> nsd_a_10     nsd         512          -1 No       Yes   ready         up           system
>> nsd_b_10     nsd         512          -1 No       Yes   ready         up           system
>> nsd_c_10     nsd         512          -1 No       Yes   ready         up           system
>> nsd_d_10     nsd         512          -1 No       Yes   ready         up           system
>> nsd_a_11     nsd         512          -1 No       Yes   ready         up           system
>> nsd_b_11     nsd         512          -1 No       Yes   ready         up           system
>> nsd_c_11     nsd         512          -1 No       Yes   ready         up           system
>> nsd_d_11     nsd         512          -1 No       Yes   ready         up           system
>> nsd_a_12     nsd         512          -1 No       Yes   ready         up           system
>> nsd_b_12     nsd         512          -1 No       Yes   ready         up           system
>> nsd_c_12     nsd         512          -1 No       Yes   ready         up           system
>> nsd_d_12     nsd         512          -1 No       Yes   ready         up           system
>> work_md_pf1_1 nsd         512         200 Yes      No    ready         up           system
>> jbf1z1       nsd        4096        2012 No       Yes   ready         up           sas_ssd4T
>> jbf2z1       nsd        4096        2012 No       Yes   ready         up           sas_ssd4T
>> jbf3z1       nsd        4096        2012 No       Yes   ready         up           sas_ssd4T
>> jbf4z1       nsd        4096        2012 No       Yes   ready         up           sas_ssd4T
>> jbf5z1       nsd        4096        2012 No       Yes   ready         up           sas_ssd4T
>> jbf6z1       nsd        4096        2012 No       Yes   ready         up           sas_ssd4T
>> jbf7z1       nsd        4096        2012 No       Yes   ready         up           sas_ssd4T
>> jbf8z1       nsd        4096        2012 No       Yes   ready         up           sas_ssd4T
>> jbf1z2       nsd        4096        2012 No       Yes   ready         up           sas_ssd4T
>> jbf2z2       nsd        4096        2012 No       Yes   ready         up           sas_ssd4T
>> jbf3z2       nsd        4096        2012 No       Yes   ready         up           sas_ssd4T
>> jbf4z2       nsd        4096        2012 No       Yes   ready         up           sas_ssd4T
>> jbf5z2       nsd        4096        2012 No       Yes   ready         up           sas_ssd4T
>> jbf6z2       nsd        4096        2012 No       Yes   ready         up           sas_ssd4T
>> jbf7z2       nsd        4096        2012 No       Yes   ready         up           sas_ssd4T
>> jbf8z2       nsd        4096        2012 No       Yes   ready         up           sas_ssd4T
>> jbf1z3       nsd        4096        2034 No       Yes   ready         up           sas_ssd4T
>> jbf2z3       nsd        4096        2034 No       Yes   ready         up           sas_ssd4T
>> jbf3z3       nsd        4096        2034 No       Yes   ready         up           sas_ssd4T
>> jbf4z3       nsd        4096        2034 No       Yes   ready         up           sas_ssd4T
>> jbf5z3       nsd        4096        2034 No       Yes   ready         up           sas_ssd4T
>> jbf6z3       nsd        4096        2034 No       Yes   ready         up           sas_ssd4T
>> jbf7z3       nsd        4096        2034 No       Yes   ready         up           sas_ssd4T
>> jbf8z3       nsd        4096        2034 No       Yes   ready         up           sas_ssd4T
>> jbf1z4       nsd        4096        2034 No       Yes   ready         up           sas_ssd4T
>> jbf2z4       nsd        4096        2034 No       Yes   ready         up           sas_ssd4T
>> jbf3z4       nsd        4096        2034 No       Yes   ready         up           sas_ssd4T
>> jbf4z4       nsd        4096        2034 No       Yes   ready         up           sas_ssd4T
>> jbf5z4       nsd        4096        2034 No       Yes   ready         up           sas_ssd4T
>> jbf6z4       nsd        4096        2034 No       Yes   ready         up           sas_ssd4T
>> jbf7z4       nsd        4096        2034 No       Yes   ready         up           sas_ssd4T
>> jbf8z4       nsd        4096        2034 No       Yes   ready         up           sas_ssd4T
>> work_md_pf1_2 nsd         512         200 Yes      No    ready         up           system
>> work_md_pf1_3 nsd         512         200 Yes      No    ready         up           system
>> work_md_pf1_4 nsd         512         200 Yes      No    ready         up           system
>> work_md_pf2_5 nsd         512         199 Yes      No    ready         up           system
>> work_md_pf2_6 nsd         512         199 Yes      No    ready         up           system
>> work_md_pf2_7 nsd         512         199 Yes      No    ready         up           system
>> work_md_pf2_8 nsd         512         199 Yes      No    ready         up           system
>>
>>
>> [root at cl005 net]# mmlsfs work -R -r -M -m -K
>> flag                value                    description
>> ------------------- ------------------------ -----------------------------------
>>  -R                 2                        Maximum number of data replicas
>>  -r                 2                        Default number of data replicas
>>  -M                 2                        Maximum number of metadata replicas
>>  -m                 2                        Default number of metadata replicas
>>  -K                 whenpossible             Strict replica allocation option
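>>
>> For illustration only (the file path below is a placeholder): with -r and -m
>> already at 2, the replication factors actually applied to an individual file
>> can be checked with mmlsattr, and existing data can be re-replicated once
>> more than one failure group exists, e.g.:
>>
>>   # show current data/metadata replication factors of one file
>>   mmlsattr /work/some/file
>>
>>   # repair placement/replication of existing files after fixing failure groups
>>   mmrestripefs work -R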
>>
>>
>> On Mon, Jan 9, 2017 at 3:34 PM, Yaron Daniel <YARD at il.ibm.com> wrote:
>> Hi
>>
>> 1) Yes - if you have only 1 failure group, replication will not work.
>>
>> 2) Do you have 2 storage systems? When using GPFS replication, writes
>> stay the same - but reads can be doubled, since data is read from 2
>> storage systems.
>>
>> Hope this helps - what are you trying to achieve? Can you share your
>> environment setup?
>>
>>
>> Regards
>>
>> ------------------------------
>>
>> Yaron Daniel
>> Server, Storage and Data Services - Team Leader
>> Global Technology Services, IBM Israel
>> 94 Em Ha'Moshavot Rd, Petach Tiqva, 49527
>> Phone: +972-3-916-5672   Fax: +972-3-916-5672
>> Mobile: +972-52-8395593
>> e-mail: yard at il.ibm.com
>> IBM Israel: http://www.ibm.com/il/he/
>>
>>
>> From:    Brian Marshall <mimarsh2 at vt.edu>
>> To:      gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
>> Date:    01/09/2017 10:17 PM
>> Subject: [gpfsug-discuss] replication and no failure groups
>> Sent by: gpfsug-discuss-bounces at spectrumscale.org
>> ------------------------------
>>
>>
>>
>>
>> All,
>>
>> If I have a filesystem with replication set to 2 and 1 failure group:
>>
>> 1) I assume replication won't actually happen, correct?
>>
>> 2) Will this impact performance, i.e. cut write performance in half, even
>> though it really only keeps 1 copy?
>>
>> End goal - I would like a single storage pool within the filesystem to
>> be replicated without affecting the performance of all other pools (which
>> only have a single failure group).
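>>
>> For illustration only (which pool to replicate is an assumption; the pool
>> names are the ones shown in the mmlsdisk output above): replication is a
>> per-file attribute, so one way to replicate only files placed into a single
>> pool is a placement policy rule with a REPLICATE clause, e.g.:
>>
>>   RULE 'repl_ssd' SET POOL 'sas_ssd4T' REPLICATE (2)
>>   RULE 'default'  SET POOL 'system'
>>
>>   /* test/install the rules against the filesystem, e.g.:
>>      mmchpolicy work policy.rules -I test  */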
>>
>> Thanks,
>> Brian Marshall
>> VT - ARC
>>
>>
>>
>>
>>
>>
>>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
>

