[gpfsug-discuss] replication and no failure groups

J. Eric Wonderley eric.wonderley at vt.edu
Mon Jan 9 21:01:14 GMT 2017


Hi Yaron:

We have 5 storage subsystems: 4x MD3860f and 1x IF150.

The IF150 requires data replicas=2 to get the HA and protection they
recommend.  We have it presented in a fileset that appears in users' work
areas.
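
For anyone wanting to do something similar: one way to express that (a
minimal sketch, not necessarily exactly what we run; the fileset name
'if150_work' and the policy file name are made up for illustration, and
the target pool would be whichever pool sits on the IF150, e.g. the
sas_ssd4T pool in the mmlsdisk listing below) is a file-placement rule
that replicates only the data landing in that fileset:

/* hypothetical placement rule: replicate only this fileset's data */
RULE 'if150_repl2'
  SET POOL 'sas_ssd4T'
  REPLICATE (2)
  FOR FILESET ('if150_work')

/* activated with something like: mmchpolicy work policy.rules */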

On Mon, Jan 9, 2017 at 3:53 PM, Yaron Daniel <YARD at il.ibm.com> wrote:

> Hi
>
> So, are you able to have GPFS replication across the metadata (MD) failure groups?
>
> I can see that you have 3 failure groups for data (-1, 2012, 2034). How many
> storage subsystems do you have?
>
>
>
>
> Regards
>
>
>
> ------------------------------
>
>
>
> Yaron Daniel
> Server, Storage and Data Services - Team Leader
> Global Technology Services
> IBM Israel, 94 Em Ha'Moshavot Rd, Petach Tiqva, 49527
> Phone: +972-3-916-5672
> Fax: +972-3-916-5672
> Mobile: +972-52-8395593
> e-mail: yard at il.ibm.com
> IBM Israel <http://www.ibm.com/il/he/>
>
>
>
>
>
>
>
> From:        "J. Eric Wonderley" <eric.wonderley at vt.edu>
> To:        gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> Date:        01/09/2017 10:48 PM
> Subject:        Re: [gpfsug-discuss] replication and no failure groups
> Sent by:        gpfsug-discuss-bounces at spectrumscale.org
> ------------------------------
>
>
>
> Hi Yaron:
>
> This is the filesystem:
>
> [root at cl005 net]# mmlsdisk work
> disk         driver   sector     failure holds    holds                            storage
> name         type       size       group metadata data  status        availability pool
> ------------ -------- ------ ----------- -------- ----- ------------- ------------ ------------
> nsd_a_7      nsd         512          -1 No       Yes   ready         up           system
> nsd_b_7      nsd         512          -1 No       Yes   ready         up           system
> nsd_c_7      nsd         512          -1 No       Yes   ready         up           system
> nsd_d_7      nsd         512          -1 No       Yes   ready         up           system
> nsd_a_8      nsd         512          -1 No       Yes   ready         up           system
> nsd_b_8      nsd         512          -1 No       Yes   ready         up           system
> nsd_c_8      nsd         512          -1 No       Yes   ready         up           system
> nsd_d_8      nsd         512          -1 No       Yes   ready         up           system
> nsd_a_9      nsd         512          -1 No       Yes   ready         up           system
> nsd_b_9      nsd         512          -1 No       Yes   ready         up           system
> nsd_c_9      nsd         512          -1 No       Yes   ready         up           system
> nsd_d_9      nsd         512          -1 No       Yes   ready         up           system
> nsd_a_10     nsd         512          -1 No       Yes   ready         up           system
> nsd_b_10     nsd         512          -1 No       Yes   ready         up           system
> nsd_c_10     nsd         512          -1 No       Yes   ready         up           system
> nsd_d_10     nsd         512          -1 No       Yes   ready         up           system
> nsd_a_11     nsd         512          -1 No       Yes   ready         up           system
> nsd_b_11     nsd         512          -1 No       Yes   ready         up           system
> nsd_c_11     nsd         512          -1 No       Yes   ready         up           system
> nsd_d_11     nsd         512          -1 No       Yes   ready         up           system
> nsd_a_12     nsd         512          -1 No       Yes   ready         up           system
> nsd_b_12     nsd         512          -1 No       Yes   ready         up           system
> nsd_c_12     nsd         512          -1 No       Yes   ready         up           system
> nsd_d_12     nsd         512          -1 No       Yes   ready         up           system
> work_md_pf1_1 nsd         512         200 Yes      No    ready         up           system
> jbf1z1       nsd        4096        2012 No       Yes   ready         up           sas_ssd4T
> jbf2z1       nsd        4096        2012 No       Yes   ready         up           sas_ssd4T
> jbf3z1       nsd        4096        2012 No       Yes   ready         up           sas_ssd4T
> jbf4z1       nsd        4096        2012 No       Yes   ready         up           sas_ssd4T
> jbf5z1       nsd        4096        2012 No       Yes   ready         up           sas_ssd4T
> jbf6z1       nsd        4096        2012 No       Yes   ready         up           sas_ssd4T
> jbf7z1       nsd        4096        2012 No       Yes   ready         up           sas_ssd4T
> jbf8z1       nsd        4096        2012 No       Yes   ready         up           sas_ssd4T
> jbf1z2       nsd        4096        2012 No       Yes   ready         up           sas_ssd4T
> jbf2z2       nsd        4096        2012 No       Yes   ready         up           sas_ssd4T
> jbf3z2       nsd        4096        2012 No       Yes   ready         up           sas_ssd4T
> jbf4z2       nsd        4096        2012 No       Yes   ready         up           sas_ssd4T
> jbf5z2       nsd        4096        2012 No       Yes   ready         up           sas_ssd4T
> jbf6z2       nsd        4096        2012 No       Yes   ready         up           sas_ssd4T
> jbf7z2       nsd        4096        2012 No       Yes   ready         up           sas_ssd4T
> jbf8z2       nsd        4096        2012 No       Yes   ready         up           sas_ssd4T
> jbf1z3       nsd        4096        2034 No       Yes   ready         up           sas_ssd4T
> jbf2z3       nsd        4096        2034 No       Yes   ready         up           sas_ssd4T
> jbf3z3       nsd        4096        2034 No       Yes   ready         up           sas_ssd4T
> jbf4z3       nsd        4096        2034 No       Yes   ready         up           sas_ssd4T
> jbf5z3       nsd        4096        2034 No       Yes   ready         up           sas_ssd4T
> jbf6z3       nsd        4096        2034 No       Yes   ready         up           sas_ssd4T
> jbf7z3       nsd        4096        2034 No       Yes   ready         up           sas_ssd4T
> jbf8z3       nsd        4096        2034 No       Yes   ready         up           sas_ssd4T
> jbf1z4       nsd        4096        2034 No       Yes   ready         up           sas_ssd4T
> jbf2z4       nsd        4096        2034 No       Yes   ready         up           sas_ssd4T
> jbf3z4       nsd        4096        2034 No       Yes   ready         up           sas_ssd4T
> jbf4z4       nsd        4096        2034 No       Yes   ready         up           sas_ssd4T
> jbf5z4       nsd        4096        2034 No       Yes   ready         up           sas_ssd4T
> jbf6z4       nsd        4096        2034 No       Yes   ready         up           sas_ssd4T
> jbf7z4       nsd        4096        2034 No       Yes   ready         up           sas_ssd4T
> jbf8z4       nsd        4096        2034 No       Yes   ready         up           sas_ssd4T
> work_md_pf1_2 nsd         512         200 Yes      No    ready         up           system
> work_md_pf1_3 nsd         512         200 Yes      No    ready         up           system
> work_md_pf1_4 nsd         512         200 Yes      No    ready         up           system
> work_md_pf2_5 nsd         512         199 Yes      No    ready         up           system
> work_md_pf2_6 nsd         512         199 Yes      No    ready         up           system
> work_md_pf2_7 nsd         512         199 Yes      No    ready         up           system
> work_md_pf2_8 nsd         512         199 Yes      No    ready         up           system
> [root at cl005 net]# mmlsfs work -R -r -M -m -K
> flag                value                    description
> ------------------- ------------------------ -----------------------------------
>  -R                 2                        Maximum number of data replicas
>  -r                 2                        Default number of data replicas
>  -M                 2                        Maximum number of metadata replicas
>  -m                 2                        Default number of metadata replicas
>  -K                 whenpossible             Strict replica allocation option
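>
> One thing worth noting from the listing above: the system-pool NSDs are all
> in failure group -1 at the moment. If we ever assign real failure groups
> there, my understanding is that something along these lines would do it and
> then bring existing files up to the replication settings shown above. This
> is a sketch only, not something I have run here; the stanza file name is
> made up and only two of the NSDs are shown:
>
> # fg.stanza (one %nsd line per NSD, grouped by storage subsystem)
> %nsd: nsd=nsd_a_7 failureGroup=1
> %nsd: nsd=nsd_a_8 failureGroup=2
>
> mmchdisk work change -F fg.stanza
> mmrestripefs work -R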
>
>
> On Mon, Jan 9, 2017 at 3:34 PM, Yaron Daniel <YARD at il.ibm.com> wrote:
> Hi
>
> 1) Yes, if you have only 1 failure group, replication will not work.
>
> 2) Do you have 2 storage systems? When using GPFS replication, write
> performance stays the same, but reads can be double, since data can be read
> from both storage systems.
>
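> For example, with 2 storage systems you give each system its own failure
> group when the NSDs are created, so GPFS can place the two data replicas on
> different systems. A minimal stanza sketch (the NSD, device, and server
> names here are only examples):
>
> %nsd: nsd=stg1_lun01 device=/dev/dm-10 servers=nsd01,nsd02 usage=dataAndMetadata failureGroup=1 pool=system
> %nsd: nsd=stg2_lun01 device=/dev/dm-20 servers=nsd03,nsd04 usage=dataAndMetadata failureGroup=2 pool=system
>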
> Hope this helps. What are you trying to achieve? Can you share your
> environment setup?
>
>
> Regards
>
>
>
> ------------------------------
>
>
>
> Yaron Daniel
> Server, Storage and Data Services - Team Leader
> Global Technology Services
> IBM Israel, 94 Em Ha'Moshavot Rd, Petach Tiqva, 49527
> Phone: +972-3-916-5672
> Fax: +972-3-916-5672
> Mobile: +972-52-8395593
> e-mail: yard at il.ibm.com
> IBM Israel <http://www.ibm.com/il/he/>
>
>
>
>
>
>
>
> From:        Brian Marshall <mimarsh2 at vt.edu>
> To:        gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> Date:        01/09/2017 10:17 PM
> Subject:        [gpfsug-discuss] replication and no failure groups
> Sent by:        gpfsug-discuss-bounces at spectrumscale.org
>
> ------------------------------
>
>
>
>
> All,
>
> If I have a filesystem with replication set to 2 and 1 failure group:
>
> 1) I assume replication won't actually happen, correct?
>
> 2) Will this impact performance, i.e. cut write performance in half, even
> though it really only keeps 1 copy?
>
> End goal - I would like a single storage pool within the filesystem to be
> replicated, without affecting the performance of all other pools (which only
> have a single failure group).
>
> Thanks,
> Brian Marshall
> VT - ARC