[gpfsug-discuss] replication and no failure groups

Yaron Daniel YARD at il.ibm.com
Mon Jan 9 20:34:29 GMT 2017


Hi

1) Yes - if you have only one failure group, replication will not work: 
GPFS needs at least two failure groups so that it can place the two 
copies on independent disks.
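
For reference, a quick way to check this from the command line - the file 
system name fs1 and the disk name below are just examples, adjust them to 
your environment:

    # List the disks of the file system; the "failure group" column
    # shows how GPFS can separate the two copies:
    mmlsdisk fs1 -L

    # Show the default (-m/-r) and maximum (-M/-R) replication factors:
    mmlsfs fs1 -m -M -r -R

    # If all disks sit in one failure group, a disk can be moved to a
    # second group (disk name and group number are hypothetical):
    mmchdisk fs1 change -d "nsd_data_05:::dataAndMetadata:2"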

2) Do you have 2 storage systems? With GPFS replication, write performance 
stays roughly the same - each write goes to both copies in parallel - but 
read throughput can be up to double, since reads can be served from both 
storage systems.
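
Note that how reads are spread over the two copies depends on the 
readReplicaPolicy setting; a small sketch (this is a cluster-wide setting, 
so please test before changing it):

    # Show the current setting:
    mmlsconfig readReplicaPolicy

    # Read from whichever replica answers fastest instead of
    # preferring the "local" copy:
    mmchconfig readReplicaPolicy=fastest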

Hope this helps. What are you trying to achieve? Can you share your 
environment setup?
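
One more thing, regarding the end goal in your mail: data replication in 
GPFS is a per-file attribute, not a per-pool one, so the usual way to 
replicate just one pool is a file placement policy that sets REPLICATE(2) 
for the files going into it. A rough sketch - the pool and fileset names 
are made up:

    /* Files created in fileset 'projects' land in pool 'replicated'
       with two data copies; everything else keeps one copy in the
       system pool. */
    RULE 'repl2' SET POOL 'replicated' REPLICATE(2)
         FOR FILESET ('projects')
    RULE 'default' SET POOL 'system'

You would install this with mmchpolicy fs1 policy.txt. The file system's 
maximum data replication (mmlsfs fs1 -R) must already be at least 2, and 
the disks behind that pool still need two failure groups; the other pools 
keep a single copy, so their write performance should not change.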

 
Regards,

Yaron Daniel
Server, Storage and Data Services - Team Leader
Global Technology Services
IBM Israel
94 Em Ha'Moshavot Rd, Petach Tiqva, 49527, Israel
Phone: +972-3-916-5672
Fax: +972-3-916-5672
Mobile: +972-52-8395593
e-mail: yard at il.ibm.com



From:   Brian Marshall <mimarsh2 at vt.edu>
To:     gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date:   01/09/2017 10:17 PM
Subject:        [gpfsug-discuss] replication and no failure groups
Sent by:        gpfsug-discuss-bounces at spectrumscale.org



All,

If I have a filesystem with replication set to 2 and 1 failure group:

1) I assume replication won't actually happen, correct?

2) Will this impact performance, i.e. cut write performance in half, even 
though it really only keeps 1 copy?

End goal - I would like a single storage pool within the filesystem to be 
replicated without affecting the performance of all the other pools (which 
only have a single failure group).

Thanks,
Brian Marshall
VT - ARC
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss



